The article offers practical advice to go along with this framing, like configuring AI services to write/speak in a more robotic tone. I think that's a decent path to try.
This is actually one of the things that made LLMs more usable for me. The default tone and style they tend to write in are nauseating and bury information in prose that sounds like a corporate presentation.
But liability and ethics cannot be put aside. If treatments were free and perfectly addressed problems, then a correct diagnosis would always lead to the optimal patient outcome. In that scenario, AI diagnosis would be like code generation and go asymptotic to perfection as models improve.
But a doctor's job in the real world today is to navigate a total mess of uncertainty: about the expected outcome of treatments given a patient's age and other problems. About the psychological effect of knowing about a problem that they cannot effectively treat. Even about what the signals in the chart and X-ray mean with any certainty.
We are very far from having unit test suites for medical problems.
Liability would put all this to bed. Is OpenAI liable for malpractice if it misdiagnoses your issue? No? Then it's no substitute. Being right is not nearly as important as being responsible. Unfortunately, there is a widespread perception that software defects are acceptable, whereas operating on the wrong leg isn't.
Sure, but my anecdotal experience is that doctors do this regularly in real life, especially when choosing to diagnose or ignore problems that are unlikely to kill an aging patient before some other larger issue does.
> AI diagnosis would be like code generation and go asymptotic to perfection as models improve
uhhhhhhh, I'm pretty behind the times on this stuff, so I could be the one who's wrong here, but I don't believe that has happened?
Anyway, that nitpicking aside, I agree with you wholeheartedly that reducing the doctor's job to diagnosis (and specifically to whatever subset of it can be done by a machine-learning model that doesn't even get to physically interact with the patient) is extremely myopic and probably a bit insulting to actual doctors.
Actually security clearances do include an NDA. When I signed mine it contained an amusing clause, something to the effect of you will not share classified information until 70 years have passed or you die, whichever is _later_.
Could the person who drafted that have been contemplating something like a Dead Man's Switch? Even if so, I'm not sure what teeth it would have in terms of consequences after you're dead.
Or some weird scenario where an individual technically dies but is then brought back to life?
Or maybe they secretly recruit zombies and only drafted one set of employment contracts.
It seems obvious that a humanoid robot system or other truly general-purpose AI will need a stack of model types that work in concert. An LLM could be analogous to the conscious part of our brains, while many smaller and possibly frequently updateable models might provide "muscle memory" and reflexes.
If that becomes the case, then similarly built humanoid robots might have differentiated capabilities depending on their experience, just like us.
I think ultimately we're going to see structures that start to approximate the Type 1 and Type 2 thinking systems in humans: fast, deterministic models at the microsecond and millisecond scales, and something in the current LLM ballpark for tactical, short-term work. We probably don't have a model that is good enough out of the box for medium- and long-term planning; I think that's the most obvious gap in this kind of tower-of-Hanoi-style model stack.
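To make the timescale split concrete, here's a toy sketch of what such a stack could look like. Every class name, threshold, and the escalation rule is invented purely for illustration; no real robotics framework works exactly like this:

```python
# Toy sketch of a layered "model stack": a fast deterministic reflex
# layer plus a slower deliberative planner. All names and numbers here
# are made up for illustration -- this is not a real robotics framework.

class ReflexPolicy:
    """Millisecond-scale 'muscle memory': cheap enough to run every tick."""

    def act(self, proximity: float) -> str:
        # Hard-coded, deterministic rule.
        return "brake" if proximity < 0.2 else "continue"


class TacticalPlanner:
    """Slower layer, analogous to an LLM choosing short-term goals."""

    def plan(self, goal: str, context: str) -> list[str]:
        # Stub: a real system would invoke a large model here.
        return [f"replan route to {goal} given {context}", f"resume {goal}"]


def control_loop(sensor_readings: list[float], goal: str = "charging dock") -> None:
    reflex, planner = ReflexPolicy(), TacticalPlanner()
    plan = planner.plan(goal, "startup")      # slow path: runs rarely
    for tick, reading in enumerate(sensor_readings):
        action = reflex.act(reading)          # fast path: runs every tick
        if action == "brake":                 # escalate to the slow layer
            plan = planner.plan(goal, f"obstacle at tick {tick}")
        print(tick, action, plan[0])


control_loop([0.9, 0.8, 0.1, 0.7])
```

The missing medium/long-term layer would sit above the planner and be consulted even less frequently; that's the gap I mean.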
I think the LLM is more like the 'internal monologue'. I'm quite unqualified to claim this since I don't have one as far as I can tell, but I understand it's constantly observing and providing 'first draft' thinking of roughly LLM quality.
An LLM is more like the unconscious part of my brain. It’s my gut. It shits out answers using an ungodly amount of parallel processing and it’s often right.
But it also hallucinates thoughts and beliefs too, and that’s where the conscious parts have to intervene.
But the conscious parts are expensive to run and I can’t multi-task that.
The conscious parts also degrade first when I don’t get enough sleep.
Did it truly take someone else externalizing the mechanics of cognition into a machine for you to be able to notice them and become interested in them?
And then to remain focused on the machine that you see, rather than the machine that you are.
This reads like the Trump 4D-chess excuse. It seems unlikely that this is a ruse, and much more likely that OpenAI's market cap is supported by doing "all the things" to exploit the huge monthly active user base OpenAI has accumulated.
The migration sharing is admirable and useful teaching, thank you!
I see the DigitalOcean vs. Hetzner comparison as a tradeoff we make in different domains all day long, similar to opening DoorDash or UberEats instead of making your own dinner (and the cost ratio is similar, too).
I work in all 3 major clouds, on-prem, the works. I still head to the DigitalOcean console for bits-and-pieces-type work or proof-of-concept testing. Sometimes you just want to click a button and the server or bucket or whatever is ready and here's the access info and it has sane defaults and if I need backups or whatnot it's just a checkbox. Your time is worth money too.
> Sometimes you just want to click a button and the server or bucket or whatever is ready and here's the access info and it has sane defaults and if I need backups or whatnot it's just a checkbox. Your time is worth money too.
You're describing Hetzner Cloud, which has been like this for many years. At least 6.
Hetzner also offers the Hetzner Cloud API, which means we don't have to click any buttons at all and can keep everything in IaC (see the sketch below).
I personally find Hetzner's console even better than DigitalOcean's, especially since DigitalOcean now looks like three slightly different consoles depending on which page you're on. It feels like they've been migrating to a new system but haven't finished it yet.
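To make the IaC point concrete, here's a minimal sketch of creating a server through the public Hetzner Cloud API with plain Python. The `server_type`, `image`, and `location` values are just examples (check the catalog for what's available), and in practice you'd more likely reach for Terraform's hcloud provider or the hcloud CLI:

```python
import os

import requests

# Minimal sketch: create a server via the Hetzner Cloud API
# (https://api.hetzner.cloud/v1). Values below are example choices,
# not recommendations.

HCLOUD_TOKEN = os.environ["HCLOUD_TOKEN"]  # API token from the console

resp = requests.post(
    "https://api.hetzner.cloud/v1/servers",
    headers={"Authorization": f"Bearer {HCLOUD_TOKEN}"},
    json={
        "name": "poc-box",
        "server_type": "cx22",    # example instance type
        "image": "ubuntu-24.04",  # example image
        "location": "nbg1",       # example location (Nuremberg)
    },
)
resp.raise_for_status()
server = resp.json()["server"]
print(server["id"], server["public_net"]["ipv4"]["ip"])
```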
One is about all the steps of a zero-downtime migration. That part is widely applicable.
The other is the decision to replace a cloud instance with bare metal. It saves a lot in costs, but the loss of fast failover and data backups has to be priced in.
If I were doing this, I would run a hot spare for an extra $200 and switch the primary every few days, to guarantee that both copies work well and that the switchover is easy (see the sketch below). It would be a relatively low price for a massive reduction in the risk of a catastrophic failure.
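A minimal sketch of what that periodic switchover could look like, assuming a Hetzner Cloud floating IP sits in front of both boxes. All IDs below are placeholders; dedicated bare-metal servers would use the separate Robot failover-IP API instead, and a real setup needs health checks and data replication on top:

```python
import os

import requests

# Hypothetical switchover script for the hot-spare idea above: move a
# Hetzner Cloud floating IP between the primary and the spare. The IDs
# are placeholders for illustration.

API = "https://api.hetzner.cloud/v1"
TOKEN = os.environ["HCLOUD_TOKEN"]   # API token from the cloud console
FLOATING_IP_ID = 4711                # placeholder floating IP ID
PRIMARY_ID, SPARE_ID = 111, 222      # placeholder server IDs


def assign(server_id: int) -> None:
    """Point the floating IP at the given server."""
    resp = requests.post(
        f"{API}/floating_ips/{FLOATING_IP_ID}/actions/assign",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"server": server_id},
    )
    resp.raise_for_status()


# Run from cron every few days, alternating between the two targets,
# so both copies are regularly exercised as the live primary.
assign(SPARE_ID)
```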
Cute; I'd somehow missed ever seeing that one. The omitted con of electric engines (batteries cost far more to build than a gas tank, so you're likely to have more expensive storage AND less of it) makes the XKCD joke miss. BUT... since there's probably something DigitalOcean offers that Hetzner doesn't, it might actually be a very appropriate XKCD for the situation, precisely because there's a tradeoff the comic didn't mention. (I haven't used Hetzner, so I don't know firsthand what the tradeoff is, but a quick search suggests Hetzner doesn't do managed Kubernetes, so that might be it for some people. Or it might be something else; everybody has their own situation.)
Ah, the golden rule. A classic, but it's so simplistic that it can encourage bad behavior: you can never assume that something you want or don't want applies to anyone else.
I think a better formulation is the so-called "platinum rule", i.e. to treat people as they want to be treated (with the important qualification that you ∈ people). But even then it's not without issues (what if someone's wants are harmful to them, e.g. a child refusing to eat anything but candy?), and it's still a far cry from illuminating "objective moral principles" and fairly useless as a calculus for balancing different people's competing interests.
They look kind of translucent to me. Maybe the first slugs of this kind just had a digestive quirk that didn't break down the chloroplasts, and the small trickle of extra energy made those individuals more successful because they didn't need to eat as often as those who digested theirs. Yada yada, other mutations among the indigestible-chloroplast population conferred further advantages when the chloroplasts sat closer to the skin, they outcompeted their peers, etc.
Well, we can easily see that the "abundance" people are wrong (for example, not everyone can have a penthouse apartment overlooking Central Park, no matter how capable the robots become).
An alternative possibility is that inequality is about to explode between those who profit from AI/robotic labor and those displaced by it.