It's good. I use Codex right now. I purposefully slow down to at least read/review the code it generates, unless I'm creating something intentionally throwaway. It helps me most when dealing with languages and frameworks I'm not familiar with. I also use ChatGPT as a rubber duck, and although it's often too verbose, I enjoy it. There are still many times when it won't provide the key insight to a problem, but once you supply it, it instantly agrees as if it were always obvious. On the other hand, it has helped me grok many subjects, especially academic ones.
As soy is a nut, the chai soy milk lattes may have worked. He's an omnivore, not a necrophage:
> Bigfoot are omnivores. "They eat both plants and meat. I've seen accounts that they eat everything from berries, leaves, nuts, and fruit to salmon, rabbit, elk, and bear."
The "ad-free option" is also called "muting and looking away". Don't let them trick you into thinking they have the right and the control to shove anything they want into your mind.
SponsorBlock offers by far the best experience. It skips over channel intros and outros, engagement prompts, sponsored segments, tangents, etc. (configurable per channel) and offers jumping to the "highlight" (that is, the most important part of the video).
Highly ironic that the best experience is free, and no paid option comes even close. Tim Cook watching paid YouTube on an Apple TV device has a far worse experience than some random kid with Firefox and SponsorBlock gets for free.
When I ask ChatGPT to create a Mermaid diagram for me, it regularly adds newlines to certain labels that break the parse. If you then feed the parse error back to it, the second version is always correct, and it seems to know exactly what the problem is. There are some other examples where it will almost always get it wrong the first time but right if nudged to correct itself. I wonder what the underlying cause is.
It responds with the statistically most probable text based on its training data, which happens to be different with the errors vs without. I suspect high-fidelity diagramming requires a different attention architecture from the common ones used in sentence-optimized models.
Today I asked Claude to create me a Squidward-looking-out-the-window meme, and it started generating HTML & CSS to draw Squidward in a style best described as "4-year-old preschooler". Not quite there yet.
The issue for Claude is that Anthropic doesn't have an image-generation model that I know of, so the only tool available to the LLM for drawing something is to start doing vector stuff in CSS, which is very hard for it (see the pelicans).
Gemini, ChatGPT, or Grok would find this a lot easier, as they can generate an image inline, although IP restrictions might bite you. Even Grok wants to lecture on IP these days, but at least it's fairly trivial to jailbreak.
I think the problem should be framed as "why does it not loop the errors from the first attempt back so it can fix them on the second attempt" rather than why it fails to produce a fully correct implementation on the first pass.
You've got to give it a way (e.g. rendering with Playwright and friends) and tell it to use that way to verify correctness. It's not going to create the guardrail for you, but if you provide it with one, the output is much better.
This is one of the issues I’ve attempted to tackle with the Mermaid Studio plugin for IntelliJ.
It provides both syntax guides and syntax/semantic analysis as MCP Tools, so you can have an agent iteratively refine diagrams with good context for patterns like multi-line text and comments (LLMs love end-of-line comments, but Mermaid.js often doesn’t).
Observed from 5.2, on chatgpt.com. Earlier versions did worse, as in they might take a few prompts to generate parseable syntax. Newer versions usually deliver one unparseable version, then get it right on the second try. I could likely prompt-engineer it to one-shot, but I think I would always need the specific warning about newlines.
I don't think it's about repeating the instructions, but rather providing feedback as to why it's not working.
I've noticed the same thing when creating an agentic loop: if the model outputs a syntax error, just automatically feed it back to the LLM and give it a second chance. It dramatically increases the success rate.
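A minimal sketch of that loop (the `generate` and `validate` functions are placeholders for your LLM call and syntax checker, not a real API):

```javascript
// Sketch of an automatic retry loop. generate(prompt) stands in for the LLM
// call; validate(text) returns an error message string, or null if the
// output parses cleanly.
async function generateWithRetry(generate, validate, prompt, maxAttempts = 3) {
  let output = await generate(prompt);
  for (let attempt = 1; attempt < maxAttempts; attempt++) {
    const error = validate(output);
    if (error === null) return output; // parsed cleanly, done
    // Feed the parse error back and ask for a corrected version.
    output = await generate(
      `${prompt}\n\nYour previous attempt failed to parse:\n${error}\nPlease fix it.`
    );
  }
  return output; // best effort after the last attempt
}
```

The key design point is that the model never sees the validator's internals, only its error message, which is usually enough context for the second attempt to succeed.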
Mermaid is really bad about cutting off text after spaces, so you have to insert <br>s everywhere. I'm guessing this is getting rendered instead of escaped by your interface, or just lost in translation at the tokenizer.
My playbook for JavaScript dates is: store in UTC, exchange only in UTC, and convert to a locale date-time only in the presentation logic. This has worked well enough for me that I'm skeptical of needing anything else.
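That playbook can be sketched with nothing but built-ins (the timestamp and viewer zone here are made-up examples):

```javascript
// "Store UTC, localize at the edge" in plain JavaScript.
const storedUtc = "2026-02-01T17:30:00Z"; // what goes in the database / over the API
const instant = new Date(storedUtc);      // Date is internally just a UTC timestamp

// Presentation layer: render in whatever zone the viewer wants.
const formatter = new Intl.DateTimeFormat("en-US", {
  timeZone: "America/New_York",
  dateStyle: "medium",
  timeStyle: "short",
});
console.log(formatter.format(instant)); // e.g. "Feb 1, 2026, 12:30 PM"
```

Intl.DateTimeFormat does the zone and DST math from the runtime's tzdb, so the storage and exchange layers never need to know where the viewer is.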
For recording instantaneous events, that's usually sufficient. It's often not enough for scheduling. You can always present UTC or any other zone relative to some other zone, but you need to know that zone. Maybe you're going to a conference in another region and you want to know the time of a talk in that zone, because that's more important than your own. You either need to couple the zone with the time itself, or you need to refer to it; there are good reasons either way. Having an atomic time+zone type is basically trading space for time: when it's embedded, you can just use it, which can be better than assuming UTC and then looking up the zone based on, say, the location of the venue.
Storing in UTC is lossy. You've lost information about the event's original UTC offset, at the very least, and probably also its original time zone. Most backends today have good ways to round-trip offset information, and still compare dates easily (as if they were normalized to UTC). Some backends can even round-trip timezone information in addition to offsets.
It's easy not to feel that loss as a big deal, but captured offsets can be very helpful for debugging exactly the "what time did this user think this was?" questions, versus doing time zone math (and DST lookups) from UTC. They can help debug cases where the user's own machine missed a DST jump, was briefly on a different calendar, or was traveling.
But a lot of the biggest gains in Temporal are the "Plain" family for "wall-clock times"/"wall-calendar dates", and breaking them apart as very separate data types. Does a UTC timestamp of "2026-02-01 00:00:00Z" mean midnight specifically and exactly, or were you trying to mark "2026-02-01" without a time or time zone? Similarly, I've seen data like "0001-01-01 12:10:00Z" mean "12:10" on a clock, without the date or time zone being meaningful; Temporal has a PlainTime for that. You can combine a PlainDate + a PlainTime + a time zone to build a ZonedDateTime, but that becomes an explicit process that directly explains what you are trying to do, versus accidentally casting a `Date` intended to be just a wall-clock time and getting a garbage wall-clock date.
That generally works for timestamps (Temporal Instant). But it doesn't work for representing calendar dates with no time information (Temporal PlainDate) unless you add an additional strict convention like "calendar dates are always represented as midnight UTC".
I have a scheduling system that allows users to specify recurring events: "Every Monday at 2pm." That needs to be understood in the native time zone of that user, and it needs to be capable of being displayed in that time zone for all viewers, or optionally in the native time zone of the viewing user.
The only time you need local dates is for scheduling. Stuff like “Report KPIs for each shift. Shifts start at 8:00 local time.”, or “send this report every day at 10:00 local time”, or “this recurring meeting was created by user X while they were in TimeZone Z, make sure meetings follow DST”.
The pathological case with scheduling is: It's 2015. You live in NYC. Your pal in Santiago, Chile says "hey next time you're here let's hang out." You say "great, I have a business trip there next April. Let's have dinner at 7pm on the 15th." They agree. You enter it into your calendar. If you store it as UTC, you're going to show up to dinner at the wrong time—the DST rules changed in between when you talked and when you expected dinner to happen. If you'd stored it as a local time with tzdb name America/Santiago you'd be there at the correct local time.
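A small illustration of why the zone name matters, using the runtime's bundled tzdb (the `offsetOn` helper is made up for this sketch). Chile's UTC offset genuinely changes across the year, so a UTC instant computed under one set of rules lands at the wrong wall-clock time if the rules change between booking and dinner:

```javascript
// Illustrative helper: ask the runtime's tzdb what UTC offset a zone
// used at a given instant.
function offsetOn(isoUtc, zone) {
  const parts = new Intl.DateTimeFormat("en-US", {
    timeZone: zone,
    timeZoneName: "longOffset",
  }).formatToParts(new Date(isoUtc));
  return parts.find((p) => p.type === "timeZoneName").value;
}

// Chile observes DST, so the offset depends on the date:
console.log(offsetOn("2014-01-15T12:00:00Z", "America/Santiago")); // GMT-03:00 (summer)
console.log(offsetOn("2014-07-15T12:00:00Z", "America/Santiago")); // GMT-04:00 (winter)
// A stored UTC instant bakes one of these offsets in permanently; storing
// "19:00 on the 15th, America/Santiago" instead resolves against whatever
// rules are in force when you actually look it up.
```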
"Just use UTC" is another, albeit more subtle, falsehood programmers believe about date/time.
It's fine for distributed logging and computer-only usage, but fails in obscure ways once humans, time zones, travel, laws, and/or daylight saving time get involved.
If you're scheduling events for humans, and can't immediately list the reasons your app is an exception to the above, store the time zone to be safe. You probably don't have big data, and nobody will notice the minuscule overhead.
It does work quite well. Sometimes you need a time zone to go with it. It might not be common, but sometimes you need to know the local time in a particular zone, which is not necessarily where the user is. I work on software that works with local times in arbitrary time zones. We submit data in a schema over which we have no control, which must include such local times that may or may not be in the time zone of the server or the current client machine.
Epoch times (a.k.a. "Unix timestamps") are OK when you just need an incrementing relative time. When you start converting them back and forth to real calendar dates and times, with time zones, DST, leap seconds, etc., the dragons start to emerge.
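One of those dragons lives in JavaScript's built-in `Date` parser itself: per the spec, a date-only ISO string is parsed as UTC, while the same date with a time attached is parsed as local time.

```javascript
// Two strings that look like the same moment but parse in different zones.
const dateOnly = new Date("2026-02-01");       // spec: parsed as UTC midnight
const dateTime = new Date("2026-02-01T00:00"); // spec: parsed as *local* midnight

// In any zone other than UTC these are different instants, so casually
// round-tripping timestamps can quietly shift a calendar date by a day.
console.log(dateOnly.getTime() === dateTime.getTime()); // false unless your zone is UTC
```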
A lesson I learned pretty early on is always use the date-time datatypes and libraries your language or platform gives you. Think very carefully before you roll your own with integer timestamps.
At Facebook, a full outage is accompanied by "first time?" memes. Unless you are on the specific team responsible, you would indeed not really have any reason to care.