Hacker News | muvlon's comments

Let's Encrypt has to be down for days before people begin to feel the pain. DNS is very different, it breaks stuff immediately everywhere.

No it doesn't. DNS breaks as soon as TTLs run out. It's your choice to set them so low that stuff breaks immediately.

What do you recommend then? DNS doesn't usually change that often, but if you mess it up when it does, you're in for some pain if TTLs are high!

Not the one you're replying to, but I'd keep TTL high normally and lower it one TTL ahead of a planned change.

I would define "high" as double the time needed to fix a DNS issue, and account for weekends.
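The schedule described in this thread can be sketched as a tiny helper. The dates and TTL values below are hypothetical; the point is just the arithmetic: drop the TTL at least one old-TTL interval before the planned change, so every cached copy of the old record has expired by change time.

```python
from datetime import datetime, timedelta

def ttl_lowering_deadline(change_at: datetime, normal_ttl_s: int) -> datetime:
    """Latest moment to lower the TTL: one full old-TTL interval before
    the change, so no resolver still holds the old record at change time."""
    return change_at - timedelta(seconds=normal_ttl_s)

# Hypothetical example: a record normally served with a 24h TTL and a
# change planned for noon -> lower the TTL by noon the previous day.
change = datetime(2024, 6, 10, 12, 0)
print(ttl_lowering_deadline(change, 86400))  # 2024-06-09 12:00:00
```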

This is the way.

Unfortunately you can't set DNS TTL arbitrarily high (or low) without some resolvers ignoring your suggestion and using arbitrary values.

Most historical outages lasted minutes or hours. One arguably lasted much longer, when someone lost control of their servers due to civil war.

I haven't followed this closely, but have there been any... shall we say plain outages longer than six hours? That's not an outrageous TTL. Or a day.


This assumes that the host name you want has been recently queried. If it's not cached, good luck...

TL;DR: If it's not cached, does it really matter if it's offline for some time?

Long version:

If you're so popular all around that you really really want a very very short TTL, people will query all the time from all the places that "count", won't they? So it's gonna be cached.

If you're not so popular or not all around, what does it matter even if you had a very very short TTL? You're not losing much.


This is one category of good alerts, but not everything.

I think alerts are to ops what tests are to dev. You have "unit alerts" for some small thing like the disk usage on a single host, "integration alerts" like literally "does the page load?" and then what you describe are "regression alerts", trying to prevent something that went wrong once from going wrong again. These are great but just like you wouldn't have 100% regression tests, I think it's also smart to try to get ahead of failures and have some common sense alerts defined.
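The taxonomy above can be illustrated with some made-up alert definitions. Everything here (names, hosts, the PromQL-style expressions) is hypothetical; it just shows the three categories side by side.

```python
# Hypothetical alert rules illustrating the three categories:
# unit (one small signal), integration (end-to-end), regression
# (guard against a specific past incident).
alerts = [
    {"name": "disk_usage_high", "kind": "unit",
     "expr": "disk_used_pct{host='web-1'} > 90"},
    {"name": "homepage_loads", "kind": "integration",
     "expr": "probe_success{url='https://example.com/'} == 0"},
    {"name": "queue_backlog_repeat", "kind": "regression",
     "expr": "job_queue_depth > 10000"},  # incident from last March
]

def by_kind(kind: str) -> list[str]:
    """Return the names of all alerts in one category."""
    return [a["name"] for a in alerts if a["kind"] == kind]

print(by_kind("unit"))  # ['disk_usage_high']
```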


While I hate suid as much as the next person, it's really not the problem here.

The bug that is being exploited gives you basically arbitrary page cache poisoning. At that point it's already game over. Patching a suid program is maybe the easiest way to get a root shell from that, but far from the only one.


And the Pentagon has historically gotten away with damn near everything even in the judicial branch by appealing to national security.

What problem is this actually solving? I've deployed DHCP countless times in all sorts of environments and its "statefulness" was never an issue. Heck, even with SLAAC there's now DAD making it mildly stateful.

Don't get me wrong, SLAAC also works fine, but is it solving anything important enough to justify sacrificing 64 entire address bits for?


* privacy addresses are great

* deriving additional addresses for specific functions is great (e.g. XLAT464/CLAT)

* you don't get collisions when you lose your DHCP lease database

* as Brian says, DHCP wasn't quite there yet when IPv6 was designed

* ability to proactively change things by sending different RAs (e.g. router or prefix failover, though these don't work as well as one would hope)

* ability to encode mnemonic information into those 64 bits (when configuring addresses statically)

* optimization for the routing layers in assuming prefixes mostly won't be longer than /64

… and probably 20 others that don't come to mind immediately. I didn't even spend seconds thinking about the ones I listed here.
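The list above leans on what those 64 interface-identifier bits buy you. As one concrete illustration, here is a sketch of the classic modified-EUI-64 derivation from RFC 4291 (privacy addresses per RFC 4941 instead fill the same 64 bits with random values). The prefix and MAC below are example values.

```python
import ipaddress

def eui64_interface_id(mac: str) -> bytes:
    """Modified EUI-64: insert ff:fe in the middle of the 48-bit MAC
    and flip the universal/local bit of the first byte (RFC 4291)."""
    b = bytes(int(x, 16) for x in mac.split(":"))
    return bytes([b[0] ^ 0x02]) + b[1:3] + b"\xff\xfe" + b[3:]

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    """Combine a /64 prefix from an RA with the EUI-64 interface ID."""
    net = ipaddress.IPv6Network(prefix)
    iid = int.from_bytes(eui64_interface_id(mac), "big")
    return net[iid]

print(slaac_address("2001:db8::/64", "00:11:22:33:44:55"))
# 2001:db8::211:22ff:fe33:4455
```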


Privacy addresses... Isn't it silly to talk of privacy if the prefix doesn't change?


Absolutely schizo.

"I wish to participate in a global telecommunications network and I wish to connect immediately to all my friends and be available to them 24/7 and I wish to play games with strangers across the country and I wish to receive all my email within 300ms with no spam and I wish to watch the latest news from Iran in 4K streaming Dolby"... but priiiiivacy!


SEND secures NDP by putting a public key into those 64 bits, and also having big sparse networks renders network scanning rather useless at finding vulnerable hosts, so there are reasons to make subnets /64 other than SLAAC.

Also we can always reduce the standard subnet size in 4000::/3 if we ever somehow run out of space in 2000::/3 (and if we don't then we didn't sacrifice anything to use /64s).
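The back-of-envelope arithmetic behind that claim: each /3 slice of the IPv6 space holds an enormous number of subnets, and carving a future slice into longer prefixes multiplies that further. The /80 figure below is purely illustrative, not a proposal from the thread.

```python
# How many subnets of a given length fit inside a /3 slice of IPv6 space.
def subnets_in_slice(slice_len: int, subnet_len: int) -> int:
    return 2 ** (subnet_len - slice_len)

# /64 subnets available inside 2000::/3 today:
print(subnets_in_slice(3, 64))  # 2**61, about 2.3e18
# If 4000::/3 were someday carved into (hypothetically) /80 subnets:
print(subnets_in_slice(3, 80))  # 2**77, vastly more again
```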


DHCP requires explicit configuration; it needs a range that hopefully doesn't conflict with any VPN you use; it needs changes if your range ever gets too small; and it's just another moving part really.

With SLAAC, it's just another implementation detail of the protocol that you usually don't have to even think about, because it just works. That is a clear benefit to me.


When it fails, you find there is no option to tune its behaviour.

Plug in a rogue router and see how quickly you can find it.


What kind of failure are you referring to? What would you want to tune? You can still easily locate all devices on your network.


This doesn't register as corpo talk to me, more tongue-in-cheek nerdy mission control talk. See also "rapid unscheduled disassembly".


There are a bunch of subbrands but there are also a lot of genuine small Android phone companies, especially in China.

Some of these serve some interesting niches that might now disappear due to this DRAM supply issue, e.g. Unihertz for extra small phones or CAT for extra durable worksite phones.


Is there any 'guide' to this ecosystem? Because 'odd niche communications gear' is always interesting.


Notably they didn't fully shed it, they compartmentalized it. They proposed to split the standard into two parts: r7rs-small, the more minimal subset closer in spirit to r5rs and missing a lot of stuff from r6rs, and r7rs-large, which would contain all of r6rs plus everyone's wildest feature dreams as well as the kitchen sink.

It worked remarkably well. r7rs-small was done in 2013 and is enjoyed by many. The large variant is still not done and may never be done. That's no problem though, the important point was that it created a place to point people with ideas to instead of outright telling them "no".


> because addicts pay up.

I think it turns out they don't, not really anyway. And that's exactly why Sora is dead. They figured out that addictive AI slop has been so thoroughly commoditized that you can get it on a ton of other platforms for free, so people don't want to pay for it.


Sometimes they do pay up. Google Gemini estimates that 25% of daily active YouTube users pay for ad-free service. I know my wife and I do, and we watch a huge range of YouTube material for more hours a month than all the other streaming services we subscribe to. There is no area of human knowledge or human interest that YouTube doesn't have a ton of material for; and of course, the animal videos… The ironic thing about the Sora service being cancelled is that neither my wife nor I watch AI-generated material.


I think the real answer is that Sora-style AI slop videos just aren't as addictive as we thought they'd be.

I let my kids have access to the app in the hope they would be inoculated against being obsessed with AI video and it actually worked. They got bored in like 2 days.

It simply doesn't compare well with handcrafted short form videos that are already plentiful on TikTok (which I absolutely don't let my kids watch).


Yes, fortunately slop is pretty unwatchable after the novelty wears out. Even the lowest common denominator stuff NFLX churns out is in a different league.

I was talking to other people about the difference between code and other domains. Code is, for the customer, what it does, not how it does it. That is, we can get mad about style, idioms, frameworks, language, indentation, linting, verbosity, readability, maintainability, but it doesn't really matter to the customer as long as the code does the thing it's supposed to do.

Many things like entertainment products don't work that way. For a good book/movie/show, a good plot (the what) is table stakes. All of the how matters - dialogue, writing style, casting, camera/sound/lighting work, directing, pacing, sound track, editing, etc.

For short-format, low-stakes stuff like online ads, though, AI slop probably actually works.

Same for say making a power point. LLMs can quickly spit out a passable deck I am sure. For a lot of BS job use cases, that's actually probably fine. But if it is the key element of a sales pitch, really it's just advanced auto-formatting/complete, and the human element is still the most important part. For example I doubt all the AI startups are using AI generated sales pitches when they go to VC for funding.


IMO slop fits best for "art that isn't the point".

A promotional flyer for an event could work perfectly well in plain text. The art is pure social signal - this event is thrown by the type of people who put art in a certain style on their flyers. Your eye is caught and your brain almost immediately discards the art.

Same with power point - you make a power point so that everyone knows this decision was made by the type of people who make power points. A txt file and a png would have gotten the job done.

Same also with memes - you could just _say_ a lot of these jokes, but they're funnier with a hastily-edited image alongside.


Agreed, it's good at placeholder art for which entertainment consumption is not the point. Clip Art for the new generation.


>> you can get it on a ton of other platforms for free, so people don't want to pay for it.

What happens when other platforms start trying to get people to pay? I think there's a race to find a revenue stream for this stuff. As soon as one company can find a way to monetize it, they'll all end up doing it. Right now, we're in a place where companies are losing so much money, they have to decide how much they can lose before they pull the plug.

OpenAI just proved you cannot burn money indefinitely.


The monetization of social media has always been about steering otherwise non paying users into making purchases elsewhere. So if the AI slop can make people spend money on other products that's accomplished the goal.


I actually think this isn't even surprising from OpenBSD philosophically. They still subscribe to the Unix philosophy of old, moreso than FreeBSD and much much more than Linux.

That is, "worse is better" and it's okay to accept a somewhat leaky abstraction or less helpful diagnostics if it simplifies the implementation.

This is why `ed` doesn't bother to say anything but "?" to erroneous commands. If the user messes up, why should it be the job of the OS to handhold them? Garbage in, garbage out. That attitude may seem out of place today but consider that it came from a time when a program might have one author and 1-20 users, so their time was valued almost equally.


> That attitude may seem out of place today

It absolutely doesn't. Everywhere I've worked we were instructed to give terse error messages to the user. Perhaps not a single "?", but "Oops, something went wrong!" is pretty widespread and equally unhelpful.


It is normal to return a terse message to a remote user via an API. The remote user may be hostile, actively trying to gather information useful for breaking in.

But the local user who operates pf is already trusted, normally it would be root.

In either case, no error should be silently swallowed. Details should be logged in a secure way, else troubleshooting becomes orders of magnitude harder.


> That attitude may seem out of place today

That attitude was out of place at every point. It may have been excusable when RAM and disk space were scarce, but it isn't today; it has nothing but drawbacks.


Code size would balloon if you tried to format verbose error messages. I often look at the binaries of old EPROMs. I notice that 1) the amount of ASCII text is a big fraction of the binary, and 2) it's still just categories ("Illegal operation"). For the 1970s, we're talking user programs that fit in 2K.

I write really verbose diagnostic messages in my modern code.


There was also an implicit saving back then that an error message could be looked up in some other system (typically, a printed manual). You didn't need to write 200 chars to the screen if you could display something much shorter, like SYS-3175, and be confident that the user could look that up in the manual and understand what they're being told and what to do about it.

IBM were experts at this, right up to the OS/2 days. And as machines got more powerful, it was easy to put in code to display the extra text by a lookup in a separate file/resource. Plus it made internationalization very easy.
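The lookup-by-code pattern described above can be sketched in a few lines. The catalog entries here are illustrative, not IBM's actual message texts: the program ships only terse codes, while the long explanations live in a separate, easily translated resource.

```python
# Hypothetical message catalog: short codes on screen, full text in a
# separate resource (historically, a printed manual).
MESSAGES = {
    "SYS-3175": "The program attempted to access memory it does not own.",
    "SYS-0005": "Access denied to the requested device.",
}

def explain(code: str, catalog: dict[str, str] = MESSAGES) -> str:
    """Expand a terse on-screen code into its full explanation."""
    return catalog.get(code, f"No entry for {code}; see the manual.")

print(explain("SYS-3175"))
```

Swapping `catalog` for a translated resource file is what made internationalization cheap in this scheme.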


Even in that scenario that attitude seems out of place, considering a feature is implemented once and used many times.

