preommr's comments | Hacker News

The popular answer I've seen over the past few weeks is to just blame everything on the US, but that kind of thinking and lack of agency is exactly why countries like the UK are in the position they are.

Just constant burying heads in the sand, and believing in models where the prior assumptions are from a bygone era.


Not only do those particular prior assumptions date to 1957 in a way that makes deviating from them structurally dangerous, they involve very low military spending in a way that makes deviating from them politically dangerous.

Fixing the budget hole to pay for that spending without resorting to giving many people living in Monaco the Eichmann treatment as a side effect (which is untenable on account of French security guarantees to Monaco) would need some kind of government of hardcore believers who could also do math.


Why would one not blame the US for this situation?

I think the OP is saying that it's ok to blame, but it is not ok to just blame. It is preferable to also act in some way.

For decades the world has very reasonably operated on the assumption that the world’s primary superpower wouldn’t be dumb enough to do this without a proper plan.

Everyone except the Iranians and maybe the Israelis was caught flat-footed by this, and the things that can be done about it are largely on the years/decades scale.


The way I read the parent post is that the UK decided to invest in things other than the military. As a result they are at the whims of US foreign policy and can't really do much about it. In an alternate world where the UK spent more on hard power, they might not be subject to the whims of America to the same extent.

[To be clear, I don't 100% agree with this argument. I think there is a little truth to it, but things are also much more complicated than that, and it ignores the geopolitical tension in the region that was going to explode one way or another even without the USA.]


Because a competent country/government should plan ahead for shortages of any vital resource it depends on.

Within reason.

Until quite recently, “the US sticks its dick in the chainsaw” wasn’t a “within reason” scenario.


> Within reason

Reason dictates having redundancy in place. Having prepared scenarios for what to do. A lot of countries clearly don't have that and they are operating on the assumption that no major disruption is going to happen.

Depending on resources coming from historically unstable locations and not having plans to prepare for such instability is just foolish.


> Reason dictates having redundancy in place.

The UK doesn’t have a strategic oxygen reserve in case the atmosphere disappears.

It’s both implausible and not really something they can do much about.

Trump is that sort of scenario.


The Middle East exploding is an eventuality that is within reason to prepare for. It's famously a geopolitical powder keg.

To be fair, this is not the first time a US President stuck their dick in something and got bitten.

The USA is proximately to blame, but Iran in its current borders is an entity that is largely the brainchild of Britain. It ended up encompassing the Baloch and Kurds, who could have helped check Persian power and make Persian borders more penetrable, which was probably a geopolitical mistake.

The Baloch and the Kurds are ethnic Persians; they speak Persian languages.

>the Baloch and the Kurds are ethnic Persians,

Wut

> they speak Persian languages

As a lingua franca, sure. By some very twisted semantics maybe, but really, no.


The only state preventing free commerce through the strait is Iran.

The only reason Iran is playing that sole card they hold is their two core enemies launched a war of aggression.

The US is running a blockade of their own in the strait.

That only applies to Iranian traffic. It would in fact be an act of war for the U.S. to blockade maritime traffic of countries it's not already at war with.

So, where does that leave Cuba?

Cuba is sanctioned, not blockaded.

  Cuban tankers have hardly left the island’s shores for months. Oil-rich allies have halted shipments or declined to come to the rescue. The U.S. military has seized ships that have supported Cuba. And in recent days, vessels roaming the Caribbean Sea in search of fuel for Cuba have come up empty or been intercepted by the U.S. authorities.

  Last week, a tanker linked to Cuba burned fuel for five days to get to the port in Curaçao but then left without cargo, according to ship-tracking data. Three days later, the U.S. Coast Guard intercepted a tanker full of Colombian fuel oil en route to Cuba that had gotten within 70 miles of the island, the data showed.

  While President Trump has pledged to halt any oil headed to Cuba, the Trump administration has stopped short of calling its policy a blockade.

  But it is functioning as one.

Sure, economic sanctions have been in place for a long time, but the US has started seizing full ships.

[0] https://www.nytimes.com/2026/02/20/world/americas/cuba-oil-b...



So just a limited duration war and breaking of international law. Barely counts for those impacted.

An act of war is not necessarily a war. They happen all the time.

Yes, as retaliation for a US/Israel invasion that is against international law.

Which in turn is also against international law (international law would let them retaliate against Israel and the USA; it doesn't let them target neutral shipping [edit: to clarify, I mean neutral shipping going to neutral ports]).

Of course, international law is not worth the paper it's written on.


No, defensive blockades are explicitly permitted under international law, including against neutral parties.

https://en.wikipedia.org/wiki/Blockade

> Blockades restrict the trading rights of neutrals, who must submit for inspection for contraband, which the blockading power may define narrowly or broadly, sometimes including food and medicine.


To clarify, I meant shipping to neutral ports (Article 99 of the San Remo Manual: "A blockade must not bar access to the ports and coasts of neutral States" https://ihl-databases.icrc.org/en/ihl-treaties/san-remo-manu... ). Oman seems neutral in all this but is nonetheless affected.

They would be allowed to blockade neutral ships going to enemy ports (e.g. Israel), subject to a bunch of rules, but that doesn't seem to be what they are doing.

I don't even think Iran is claiming this is a blockade. They are claiming it's part of their territorial waters, and they are claiming that they don't recognize UNCLOS, which would give vessels transit rights (while at the same time claiming they recognize the part of UNCLOS that allows claiming 12 miles out as territorial waters). At least that is what I got from https://www.ejiltalk.org/the-legality-of-irans-closure-of-th...


There are no neutral ports there. Every other country past the strait is a US ally with US military bases hosted on their territory.

Oman is before the strait begins.


> There are no neutral ports there. Every other country past the strait is a US ally with US military bases hosted on their territory.

I don't think hosting a US base would necessarily make them non-neutral unless that base was used offensively. According to international law, Iran would also have to justify that its exercise of self-defense rights was proportional. Even if you accept that hosting a US base made a state non-neutral, I think it would be difficult to justify that a response against states simply hosting a US base met the proportionality requirements of international law.

However, even if they were enemy states, Iran would have to declare all of these countries as being under blockade, which it hasn't as far as I am aware.

> Oman is before the strait begins.

How is the Omani port of Khasab before the strait begins?


As a late thought, I would also add that the idea that this is an aggressive war against Iran relies on the idea that Iran and Hamas are separate enough that Israel striking Iran is not self-defense from when Hamas struck Israel. But the relationship between Hamas and Iran is a lot closer than that between the USA and the Gulf countries. So I think it's difficult to say that Iran striking Gulf countries that host US bases but are otherwise uninvolved is legitimate self-defense without also saying that Israel striking Iran was legitimate self-defense from the proxies they fund. I feel like logically you can't have it both ways. Either both are valid self-defense or neither is.

Did you miss the part about contraband? You quoted it, after all.

Firing on neutral shipping is not the same as intercepting it and inspecting it for war materiel or other contraband. Preventing shipping from reaching or leaving Kuwaiti ports is not the same as inspecting it for war materiel or other contraband.


Iran has been requiring shipping to submit to inspection and tolls via an adjusted route through the strait. And they can certainly deem oil contraband if they are allowed to do so for food and medicine, as quoted.

Ships that don’t stop get fired upon. That’s what happens in a blockade.

Kuwait is a US ally and hosts American military bases. Stopping shipping to there is very clearly legitimate.


> And they can certainly deem oil contraband if they are allowed to do so for food and medicine, as quoted.

Wikipedia is defining what the term blockade means, not what constitutes a legal blockade.

Medicine is not allowed to be blockaded. Food is not allowed to be blockaded if there is a shortage.

Relevant parts of the San Remo Manual:

> 102 The declaration or establishment of a blockade is prohibited if:

> (a) it has the sole purpose of starving the civilian population or denying it other objects essential for its survival; or

> (b) the damage to the civilian population is, or may be expected to be, excessive in relation to the concrete and direct military advantage anticipated from the blockade.

> 103. If the civilian population of the blockaded territory is inadequately provided with food and other objects essential for its survival, the blockading party must provide for free passage of such foodstuffs and other essential supplies, subject to...

> 104. The blockading belligerent shall allow the passage of medical supplies for the civilian population or for the wounded and sick members of armed forces,...

https://ihl-databases.icrc.org/en/ihl-treaties/san-remo-manu...

You're correct that blockading oil would be allowed (barring situations where the civilian population needs it to survive, which doesn't apply here) if this were a legal blockade. However, Iran is not complying with the other rules around blockades, which is the issue.


I'm very concerned about people downvoting the observation that this war is illegal and unnecessary even to achieve its stated goals.

I guess nobody likes hearing that their country is unethically invading other countries. As much as I hate defending Iran, I don't think there's much of a difference between what the US is doing to Iran and what Russia is doing to Ukraine.

You could look to what the ICC has charged Putin with to see some key differences.

The US is 100% without a doubt responsible for the shitshow we're in now.

We had a good thing going, and you fucked it up.


We in Europe negotiated the JCPOA. Read its terms. Understand its lead negotiator was the EU representative. That was our codified relationship with Iran. Now compare that to the best case after what you geniuses started.

Own up to it. This is solely on the US. The rest of us had it handled until you came along.


You have a Russian asset as president who acts like a chaos monkey for Putin's entertainment.

One thing is correct, though: UK security services did not anticipate such an outcome, and politicians have not done anything about it.


The Soviet Union gave the equivalent of about 80-100 million USD in support to the ANC in South Africa in the 1970s and 1980s. As soon as they got power, the ANC turned around and aligned with the west, wasting every rouble Moscow spent.

Since the inauguration Trump has supported physical seizures of many different kinds of Russia-aligned merchant shipping and the economic degradation of Russia's allies. Given all of this, we can assume that the Russian asset angle is a much less accurate explanation for Trump's behavior than the alternative theory where he is highly suggestible to the most recent person to heavily compliment him in-person which used to be Putin and has subsequently changed to some mix of Rubio, Vance, Hegseth, Netanyahu and the Trump family.


Couldn't agree more. Our Navy is a joke. We can barely muster a single destroyer. Burning goodwill with Trump is foolish when Europe is completely dependent on the US to defend it from Russia. I don't know what the strategic thinking is here. Demonstrate to the entire world that we are pathetic weaklings, and to the US that we're useless dependants?

This war might be dumb, but it was also predictable. Why were there no contingencies? Or, to quote Churchill, "If you want peace you have to prepare for war".


> I can see that there are only 3 companies competing for the duopoly or monopoly realistically: OpenAI, Anthropic, and Google.

I could see people saying this in 2022, but now? No chance.

Chinese models keep demonstrating that SOTA can be approximated for a fraction of the cost. The innovation out of these companies keeps showing diminishing returns, with a greater emphasis on the tooling and application layer. Having the right workflow with the right data is more important than having the right model. We could freeze AI now, and I'd bet good money that the current state of things is good enough to be, if not first, at least competitive for the next few years.

Even if we do end up with an oligopoly situation, it'll be less like Microsoft in the 90s and more like Microsoft now, where they just give out Windows for free, support WSL, and focus on cloud services rather than their OS.


> Chinese models keep demonstrating that SOTA can be approximated for a fraction of the cost.

Wow, sounds like a threat to national security. Those Silicon Valley company shills will start making select campaign donations in the hope of banning Chinese LLMs.

Can you imagine? Your children being exposed to the propaganda that these LLMs will inevitably be tainted to spew?


Turn it off and rage on social media.

If it gets bad enough, look into Zed. Their tagline is literally "your last next editor".


Zed currently does not have a revenue stream. It's only a matter of time before the same shenanigans ensue.

Like how GNU Emacs is completely saturated with AI now?

(That's sarcasm, in case anyone wants to pretend I'm being serious.)


Emacs is not VC-backed.

...yet.

...kidding. Obviously.


They're a commercial entity that sells AI plans and enterprise features.

Honestly not sure how viable that is long term with the way the pricing kinda needs to go. I think the recent copilot price increase is just the tip of the iceberg.

Zed is a nonstarter for me as long as they install additional software (third party runtimes to run LSPs) without asking my permission. That isn't acceptable behavior.

Unfortunately, Zed is years behind VSCode in terms of polish. Microsoft-supported LSPs just work better in VSCode, they are better integrated, and Zed can't do anything about LSP memory or performance.

> Zed is years behind VSCode in terms of polish

One could think that. But VSCode is the one that occasionally failed to simply render text.

No idea what happened those handful of times, but the UI was just completely screwed up, as if it were one of those "scratch to reveal" games, but with the file’s content (and unresponsive, obviously).


I tried VSCode some years ago (immediately moved to Codium) and yes, it is extremely well-done for what it is. But Zed is good enough for me. Everything I care about for Python, TS/JS/CSS and C programming is available. I do not even miss the JetBrains tooling for these.

I'm rooting for Zed but it does feel quite underbaked still right now.

I really hope the editor wars don't start again. I've been happily using VSCode for years now. More than happy, in fact; it's one of the best pieces of software I've ever used, as evidenced by how AI companies basically started as VSCode forks.

But this is going full-throttle on enshittification.

WTF happened at Microsoft (GitHub, OpenAI partnership, Copilot pricing) that all this shit just ramped up to 11?


The editor wars never ended, and VSCode has been user hostile since inception. It came with unavoidable telemetry right out the gate.

Yeah, this is part of the reason why vscodium exists.

vim and emacs are both still great choices.

I've been using *nix and usenet since the early 1990's.

I always thought "editor wars" was a particularly dumb in-joke among a small group and I feel sad when I see people who think it was ever more than that.

The Wikipedia page cites "The Jargon File" as an authoritative source of truth. Ridiculous.


> WTF happened at microsoft (github, openai partnership, copilot pricing) that all this shit just ramped up to a 11?

"Make a great free product so that we can enshittify it later" is an infamous MS playbook. Maybe nothing happened, maybe just the usual MS at work.


Wouldn't the opposite be true? That an LLM would use well-known terms for general-purpose writing. I think it's much more likely that a human would remember 'silent launch' or 'stealth launch' and use silent as a substitute.

I feel very strongly that comment wasn't AI generated.

Also, there's a bunch of normal comments that seem to be wrongfully flagged.


> Wouldn't the opposite be true? That an LLM would use well-known terms for general-purpose writing.

You'd think, and yet LLMs do in fact have a particular style, and lots of it is common across all LLMs.


I've had a great experience with Cloudflare Pages. It doesn't get much easier than using their CLI (wrangler) to sync up a local folder. I suppose the exception is SSR, but then again I absolutely despise SSR, so I don't think it counts.

Do they finally support native compiled languages like Vercel?

Webassembly doesn't count.


I hated Pages until I started using the CLI. I have started using it more for prototypes, etc. Can't beat it for the price.

To explain like two fundamental rules (we can make wrapper types, and do flatmap) I will:

- Write 5 paragraphs setting up an imaginary scenario involving fantasy elements of aliens, dragons, and a magical kingdom where they speak using message boxes

- Introduce basic category theory by starting with what a functor is

- Explain all the effects of a monad in such general terms that it basically amounts to anything and everything - since a function can be anything and do everything and it's just function composition

- Write some snippets of Haskell, and just assume that you're familiar with the syntax

- Talk about how delicious burritos are


I've read more than my fair share of these tutorials, and I'd like to be proven wrong here but I don't think I've ever seen one that explains what the point of these functional constructs (similarly with Applicative etc.) is.

"You can do IO now." So what? I could do IO before that as well.

Very rarely are practical explanations discussed. Even if they are discussed, the treatment is shallow and useless.


You may appreciate my own contribution, https://www.jerf.org/iri/post/2958/ , which includes an entire section titled "If They're So Wonderful Why Aren't They In My Favorite Language?", a section explaining why IO is not a good lens to understand monads and why "monads" don't really have anything to do with "making IO possible" (a very common misconception), as well as what I believe to be one of the more practical applications of monads: a way of generating an audit log of how a particular value came to be what it is. That example specifically arose from one of the rare instances I used the monad pattern in my own real code. Though I still didn't abstract out the monad interface, because if you only have one, that does you no good. The entire point of an interface is to have multiple implementations. It just happens to be a data type that could have implemented the monad interface, if there had been any use for such a thing in my code, which there wasn't.
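For readers who haven't clicked through: the audit-log idea can be sketched as a Writer-style type where each step carries a note alongside the value. This is a generic illustration of mine, not the article's actual code (which, per the above, deliberately doesn't abstract out the interface); `Logged`, `step`, and `price` are made-up names:

```haskell
-- A value paired with an audit trail of how it came to be.
newtype Logged a = Logged (a, [String]) deriving (Eq, Show)

instance Functor Logged where
  fmap f (Logged (a, notes)) = Logged (f a, notes)

instance Applicative Logged where
  pure a = Logged (a, [])
  Logged (f, n1) <*> Logged (a, n2) = Logged (f a, n1 ++ n2)

instance Monad Logged where
  Logged (a, notes) >>= f =
    let Logged (b, notes') = f a
    in Logged (b, notes ++ notes')   -- bind concatenates the audit trail

-- Perform one step, recording a note about what happened.
step :: String -> (a -> b) -> a -> Logged b
step note f a = Logged (f a, [note])

-- Example computation: the final value arrives with its full history.
price :: Logged Int
price = do
  base <- step "base price" id 100
  big  <- step "double it" (* 2) base
  step "subtract fee" (subtract 5) big
```

Running `price` yields both the result and, invisibly threaded along, every note that explains it.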

I enjoyed your article, thanks for sharing.

As I understand it, one thing the tutorial didn't go into, which I think is an important subtlety, is that it's not enough to have an implementation of "bind" to have a monad interface. You also need an implementation of "return : a -> m a" (i.e. a way of making sources of 'a's when given an 'a'), AND a proof that these implementations together satisfy the monad laws (i.e. that they "play nicely" together).

Without all three components, you can have something that "looks like" a monad, in that it has definitions for "bind" and "return", but isn't actually one, because those particular definitions don't also satisfy the monad laws.
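Concretely, the laws can be written down as executable checks. A minimal sketch for Maybe follows; the functions `f` and `g` here are arbitrary examples I picked, not anything canonical:

```haskell
-- Two arbitrary Kleisli arrows for illustration.
f :: Int -> Maybe Int
f x = if x > 0 then Just (x * 2) else Nothing

g :: Int -> Maybe Int
g x = Just (x + 1)

-- Left identity:  return a >>= f  ==  f a
leftIdentity :: Int -> Bool
leftIdentity a = (return a >>= f) == f a

-- Right identity: m >>= return  ==  m
rightIdentity :: Maybe Int -> Bool
rightIdentity m = (m >>= return) == m

-- Associativity:  (m >>= f) >>= g  ==  m >>= (\x -> f x >>= g)
associativity :: Maybe Int -> Bool
associativity m = ((m >>= f) >>= g) == (m >>= (\x -> f x >>= g))
```

These checks only probe particular values, of course; the laws proper have to hold for all inputs, which is exactly the part the compiler can't verify for you.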


Per the very last section, I chose to elide those. That's mostly because few languages worry about "laws" anyhow, and the lawfulness is less consequential in a non-lazy language because even if you nominally screw up the lawfulness, the code will still reliably do whatever it does. While I suspect we could find a pretty solid plurality of HN readers to be at least somewhat appalled at the idea, I think the general programming world is not particularly worried about it.

Plus it's rather like giving out criteria for how the frosting on the cake will be judged when most of the contestants are submitting piles of slightly dampened raw flour with an egg cracked over it and being offended when you won't agree that's a "cake".


Ah yep - I missed that mention of "lawfulness" in the last section. I guess the minor gripe I have is that that really isn't anything to do with Haskell: it's that you only have a monadic interface when the laws are satisfied (and Haskell itself doesn't, and can't, enforce the laws: you have to check them / others using your interface have to trust that you've checked them.)

I don't quite follow why you're making a distinction for non-lazy languages?

If you want to actually use any generic monad combinators with your monad interface, and expect it to behave sensibly, then the laws had better be satisfied!

But yeah... Nice article, and I really liked your "Noun / Adjective" distinction.


My understanding, which may be incorrect, is that the major reason lawfulness matters in Haskell is that the laziness makes it so that unlawful monads won't just do "something that violates the laws", an abstract, mathematical consideration that maybe you care about, maybe you don't; rather, the combination of the laziness and the aggressively optimizing compiler means that the result will be very unpredictable, and slight and seemingly isomorphic source changes can result in unpredictable results.

In Python, if I write an iterator on something pretending to be a list, and when it sees strings it doesn't just return an uppercased string but actually modifies the contents of the list to be uppercased, that's stupid, but at least since it's a strict language that isn't interleaved with IO and all the other stuff flying around in Haskell it will be consistently stupid. It isn't going to blow up or behave differently if I accidentally flip an "a + b" into a "b + a" somewhere.

It's bad, but Haskell has a whole different level of bad if you screw with it and don't play within the sandbox.

There is a definite "I'm being more pragmatic here than the average Haskell programmer" effect going on here. I... how to put this... "won't blink" is too strong, but... if I need to violate a law, if I need to write something like the stupid iterator above, I am in fact willing to. I have the decency to feel bad about it, and there will be extensive and probably bitingly sarcastic comments attached to it, but I'll do it. (Generally only when I don't control one end of the source code, though. If I have full control I never do anything that stupid.) But in Haskell it's a particularly bad idea, mostly because of the laziness and its interaction with other things.

And, heh, in a world where the struggle to explain what monads even are to people, monad combinators aren't even on my horizon.


> If They're So Wonderful Why Aren't They In My Favorite Language?

Aren't they now though? Like option is everywhere lately


Supporting "Option" is not "having monad". An Option data type can implement a Monad interface, but you can have an Option data type with no particular monad support in your language, or you can have an Option data type that implements something like "bind" or "join" but there's no interface that it conforms to.

If that sounds like gibberish it's because you don't have the right definitions loaded into your head. You can read the article I linked to fix that.

In this case note that what you are calling "Option" is called "Maybe" in Haskell and also in that article. There is an entire subsection explaining why using Maybe/Option as a lens to understand "monad" is a bad idea because by monad standards, it's degenerate, and degenerate instances of an interface make for bad examples. Just as if you're going to explain "iterator" to someone, starting out with "the iterator that returns nothing" isn't really a good idea, because it's not good to try to explain a concept with something that right out of the gate in some sense denies everything about that concept.

It's a common mistake. There are also some people who think that by adding flatmap to their list/array data type they've "implemented monads". No, they've just implemented flatmap on their list/array; they don't "support monads" by doing that. There are plenty of monad implementations that can't be understood as "flatmap", such as STM. ("flatmap" completely fails to capture the idea that a monad implementation may carry around additional data not visible from the level you're using the implementation on. That's one of the main reasons my example is structured the way it is in the article.) "flatmap" isn't "monad" in exactly the same way that "walk the next item in the array" isn't "iterator", or even more simply, "red" isn't the same as "color". Flatmap is an implementation of monad, walking the next item in the array is an implementation of iterator, red is an implementation of color.
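A minimal hand-rolled sketch of that "additional data not visible" point: a State-style monad whose bind threads a hidden value between steps, something a bare flatmap on lists can't express. The names here (`State`, `tick`, `labelTwo`) are written out from scratch for illustration, not imported from any library:

```haskell
-- A computation that produces an 'a' while threading a hidden state 's'.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State run) = State $ \s -> let (a, s') = run s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State rf <*> State ra = State $ \s ->
    let (f, s')  = rf s
        (a, s'') = ra s'
    in (f a, s'')

instance Monad (State s) where
  State run >>= f = State $ \s ->
    let (a, s') = run s      -- run the first step, producing a new state
    in runState (f a) s'     -- invisibly feed that state to the next step

-- Example: hand out sequential labels; the counter is carried invisibly.
tick :: State Int Int
tick = State $ \n -> (n, n + 1)

labelTwo :: State Int (Int, Int)
labelTwo = do
  a <- tick
  b <- tick
  return (a, b)
```

At the `do` level nothing mentions the counter, yet every `tick` sees the state left by the previous one; that threading lives entirely inside bind.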


Very few languages let you write a function that works for both Option and for other not particularly related monadic types (e.g. Future), while being fully typesafe, which is what I'd call "having monads".

Many languages have monads “by accident”, e.g. JavaScript by way of Array.flatMap() - but the fact this type happens to satisfy the monad rules is not particularly useful.

I read this years ago and I think it's the best one I've read. Thanks for writing it!

Nobody will explain it to you like this, but the main point was being able to satisfy the compiler without introducing an escape hatch into the language.

Haskell is based on Miranda, and Miranda is based on Hope. Purely functional languages were really purely functional, academic experiments with no way to express side effects, so no way to express practical programs.

Philip Wadler took the monad (the name already existed in category theory) and showed how computations could be expressed in Haskell, with the “do notation” as an example. That made Haskell practical without breaking the “beauty” of the language by introducing something outside the type checker's capacity.

So, I don’t think there’s a motivation besides being an exercise in expressivity within the limitations of pure functional programming. Similar ideas for describing computation as lazily executed instructions already existed elsewhere, like the interpreter pattern.


> and showed how computations could be expressed in Haskell with the “do notation” as an example

To be clear, do notation is new special syntax that was added to make monads more ergonomic. Traditionally you used >> or >>=, which looks a lot more like closures.
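For the curious, here is roughly what that desugaring looks like, using a hypothetical `safeDiv` as the example action; the two definitions are equivalent:

```haskell
-- A made-up example action in the Maybe monad.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv a b = Just (a `div` b)

-- With do notation:
sugared :: Maybe Int
sugared = do
  x <- safeDiv 10 2
  y <- safeDiv x 0
  return (x + y)

-- The same thing desugared into >>= and lambdas (closures):
desugared :: Maybe Int
desugared =
  safeDiv 10 2 >>= \x ->
  safeDiv x 0  >>= \y ->
  return (x + y)
```

Both short-circuit to `Nothing` at the division by zero; `do` is purely surface syntax over the nested binds.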


The point is rather that in a pure language, each IO operation needs to be dependent on a sort of "world state" which is updated for each operation. They chose to implement this state as the IO monad, but there could have been other ways.

I feel like the moment you understand what it is in Haskell, you lose the ability to explain it to people without a heavy math theory background

But from what I observed, it's a group of fancy foreach loops that they put under the same name for some reason



From the very beginning of the article (level 1), I don't see what's wrong with code that looks like the following. Early return seems to fix the "typing this makes me feel ill" part? To me, the following code seems perfectly readable without requiring the reader to know about function composition.

  def doFunctionsInSequence1(): Option[Set[Int]] = {
    val r1 = f1(null)
    if(r1.isEmpty) {
      return None
    }

    val r2 = f2(r1.get)
    if(r2.isEmpty) {
      return None
    }

    return f3(r2.get)
  }

I find that pretty repetitive, but more than that, having to reason about branching control flow adds a lot of mental overhead that I'd rather spend on my business logic.
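For contrast, here is the same shape of pipeline in Haskell with Maybe and >>=. Since the original f1/f2/f3 aren't shown, these are made-up stand-ins; the point is that all the isEmpty/early-return branching collapses into the bind operator:

```haskell
-- Hypothetical stand-ins for the f1/f2/f3 of the snippet above.
f1 :: Int -> Maybe Int
f1 x = Just (x + 1)

f2 :: Int -> Maybe Int
f2 x = if even x then Just (x * 10) else Nothing

f3 :: Int -> Maybe Int
f3 x = Just (x - 5)

-- Each >>= runs the next step only if the previous one produced a value;
-- a Nothing anywhere short-circuits the rest.
pipeline :: Int -> Maybe Int
pipeline x = f1 x >>= f2 >>= f3
```

The branching still happens, but it is hidden inside >>= rather than spelled out at every step.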

Think of Monad (and really the whole typeclassopedia in a way) like Iterable in Java

what does it give you? for loops

what do the haskell things give you? various types of for loops. monads in particular have `do` which is a language construct. but the rest are just higher order functions that are specific types of `for` loops

you learn a handful of these and then all programs are the same. you can whisk together complex control flow across domains with the same few abstractions. you can hop into a library, see these type class instances, and know how to use the library.

stuff like that.


From my experience having used Haskell (a long time ago), the main benefit of Monads is the `do` and <- syntax. Once you got your thing to satisfy the Monad interface, you unlocked the nice syntax for writing code. That, and compatibility with transformers.

Whether this is the best thing since sliced bread or not, is left as an exercise to the reader.


Hah, I like that: the main benefit of monads is turning your functional language back into an imperative one...

IMO it's because option is a monad, list is a monad, io is a monad, async is a monad, try-except is a monad; why invent different magic syntax and semantics for all of them when there's a perfectly good abstraction that covers the lot, and one that lets you write functions that are agnostic to which particular monad they're in, to boot.
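A sketch of that "agnostic to which monad" point: one function written once against the Monad interface, then reused with Maybe, lists, and Either. `pairM` is a made-up name for illustration:

```haskell
-- Written once, against any monad: run two actions and pair the results.
pairM :: Monad m => m a -> m b -> m (a, b)
pairM ma mb = do
  a <- ma
  b <- mb
  return (a, b)
```

With Maybe it short-circuits on Nothing; with lists it produces every combination; with Either it propagates the first error. Same code, three behaviors.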


> From my experience having used Haskell (a long time ago), the main benefit of Monads is the `do` and <- syntax. Once you got your thing to satisfy the Monad interface, you unlocked the nice syntax for writing code.

Nah, I don't even use the syntax much any more. The main benefit is the huge library ecosystem that works generically with any monad, so that if you want to e.g. traverse over a datastructure with your effectful action you can just use cataM or whatnot from recursion-schemes instead of writing it yourself, if you want to compose pipelines of them you just use Conduit, etc.


* Composition and reasoning. Standard things in FP. Build big things from little pieces. Understand them the same way.

* Explicitly define the order of evaluation (important in Haskell, where lazy evaluation makes the default order of evaluation difficult to trace)

* Useful mental model that helps with 1) design and 2) understanding new concepts

* Abstraction. Ignore irrelevant details. Write the standard library once, use it in many different situations.


Within most languages, you're operating at a semantic level where much of the "point" is already obviated for you. They deal with fundamental structure that you take completely for granted, and you use it all implicitly. A monad is very simple at the core of it: an ordered collection, flattened into a single context. What you're collecting, what that ordering means, what that context is, etc. define what the monad is used for.

Could you do IO? IO requires temporal ordering. Take, for instance:

    print("Hello ")
    print("World!\n")
Would obviously result in:

    Hello World!
But would it? You are implicitly assuming that the first line will be evaluated, and print, before the second. It's a reasonable assumption; most programming languages embed it in their execution semantics. What if I told you that assumption isn't actually guaranteed? What if we didn't give that temporal ordering in the same way? What if, for instance, a function could return a result without evaluating its arguments? This is called non-strict evaluation (note: this does not necessarily mean lazy evaluation). In a non-strict language, you need some way to tell the program that the first line should happen before the second before you can do any kind of IO. In a strict language, the IO monad doesn't make sense, because you don't need to tell the program that.
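One way to make that concrete is a toy "output" monad (made up for illustration; this is not GHC's real IO): each value carries its output, and bind concatenates outputs in order, so the chain of binds, not the host language's evaluation strategy, fixes which line "prints" first:

```python
# A toy "output" monad, made up for illustration (not real IO): a value
# is a (log, result) pair, and bind concatenates logs in order. The
# chain of binds, not the evaluation strategy, fixes the print order.

def out_str(s):
    return ([s], None)          # "print" s: record it in the log, no result

def out_bind(m, f):
    log1, a = m
    log2, b = f(a)
    return (log1 + log2, b)     # f's output always comes after m's

def hello():
    # do { outStr "Hello "; outStr "World!\n" }, desugared into a bind
    return out_bind(out_str("Hello "), lambda _: out_str("World!\n"))

# hello() -> (["Hello ", "World!\n"], None), in that order, by construction
```

The ordering falls out of the data dependency that bind creates, which is exactly the grounding a non-strict language needs.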

Haskell is almost a metalanguage. You're describing a program, but not the way you describe one in Python or Scheme: you are expressing a program as graph reduction, and that's very different from how you're used to thinking about computer programs. That's the practical reason Haskell has the IO and State monads: they provide a temporal grounding for instructions. Your program has a completely different concept of flow than the real world, and these are the tools that bridge the gap. It's important to note that this is just one very specific use case of monads.

If you find the treatment shallow, it's probably because you're looking for answers in shallow contexts. I used to be as confused as you, and the answer I eventually discovered is that I was ignorant of my own ignorance. I needed a healthy dose of computational philosophy to broach the subject. As someone else has said, once you understand it, it can be hard to explain to someone who doesn't. It's not a short topic to be learned from a series of twitter posts or a blog post. It's something you come to understand after a lot of exposure, study, and careful rumination. And, of course, primary sources.


A joke says that it's because once you get it, you lose the ability to explain it like a normal person :)

And another joke says the best way to explain a monad tutorial is to write another one, so sorry for this.

Just think of it as a box.

If Amazon shipped items bare, they would be hard to pack, there would be no way to standardize, and things would often break or get lost.

Now, if you put each item into one of the standardized boxes, things get 100x easier. Now you can put them on a conveyor belt, now you can have robots sorting them, now you can use tape to close them; standardization becomes easy because it's not "t-shirt, tennis ball, drill" but just "box, box, box".

So now you can do all kinds of things because it's all a box. And you can also stress test the box.

It's the same with these.

A. You can have a function that: calls something doing IO, maps its values, does a calculation, retries if wrong, stores the result, spits it out.

Or B. you can have functions that call any function doing IO, functions that map any value to any other value, functions that take any other function and, if that function fails, call another function or retry, one that stores any value given to it and reports whether it saved or not, etc.

The result is the same in the end, but while A makes the workflow strictly defined for that one case, so you have to handle every twist and turn manually (did the save succeed? what if not? write a check, write a test that ensures the check works in both cases...), B lets you define workflows with pre-tested, pre-built blocks that work with any part of your codebase.

And it makes your life 1000x easier, because now you have common components that work with any data type in your codebase, do things your way every time, are fully tested, and make it easier to handle good cases, bad cases, wiring, and logistics. And you can build pipelines out of them, because at the end of the day, what this does is let you chain functions that return wrapped values.

And you end up with code like:

    val profileData = asAsync { network.userData(userId) } // returns an Async<Result<UserData, Error>>
        .withRetries(3)        // works on Async, returns Result; retries the async call if it fails
        .withTraceId(userId)   // wrapped flatMap that wraps success into Trace<T> and adds a traceId
        .mapTrace(onError = { ErrorMappingProfile }, { user -> Profile(user.name, user.profileId) }) // a flatMap for Trace objects: unwraps, applies the function, wraps again
        .store("profile_data") // wrapped mapCatching for storage; knows how to unwrap Trace objects and store them
        .logInto(ourLogger)    // maps trace objects into the shared logger
Each of these things would before have to be manually written inside the function, the whole function tested for each edge case. if/else's, try/catch, match/when/switch.

This way, the only thing you need to cover with tests is `network.userData()`, as all the other parts are already written, tested, and do what they say they do. And you can reuse this everywhere in your projects. Instead of a function you call with data, it becomes a function you give a box and it returns a box. Then you can give that to any other function that takes a box. If boxes make no sense, think of the little connectors on Lego bricks, or pipe connectors in plumbing, or stacking USB adapters or power strips.

I can't stress enough how much this approach helped me in real life cases - refactoring old codebases especially, as once you establish some base primitives, the surface area starts massively collapsing as the test surface area increases.


I think it's worth reading this if you want to understand the initial motivation for introducing Monads to Haskell: https://homepages.inf.ed.ac.uk/wadler/papers/marktoberdorf/b...

(And in the context of the previous paper, this one motivates Applicative well I think: https://www.staff.city.ac.uk/~ross/papers/Applicative.pdf)

That said, I've never really understood the enthusiasm the industry has for introducing Monads outside of Haskell. As I understand it, at the time Philip Wadler wrote his paper, Haskell was pretty painful to use due to its adherence to purity. Monads were presented as a way to maintain purity while providing a principled way to support all kinds of effectful computations. But without some of the features Haskell provides (I'm thinking of typeclasses and HKTs in particular), and given that almost any language you'll be introduced to outside of Haskell already has ways to do e.g. IO or whatnot, it almost always ends up feeling like bolting something on with not a lot of benefit.

Don't get me wrong, I think there's value in stuff like https://github.com/fantasyland/fantasy-land (I find organizing how I think about computations around these algebraic concepts helps me a lot, personally). But that's distinct from introducing these concepts into day-to-day work in a non-Haskell language, especially on a team, which is often more trouble than it's worth unless everyone has already bought in and is willing to deal with the meaningful friction this stuff introduces.

I assume the overabundance of Monad tutorials and libraries has to do with the cachet of knowing this relatively obscure, intellectual thing and being able to explain it to your peers, or to be more charitable, perhaps it's a byproduct of getting excited about learning this new, distinct way to approach computation and wanting to share it with everyone. But the end result is that now we have tons of ridiculous tutorials and useless Monad libraries in tons of languages.


Haskell is primarily a bunch of type gymnastics designed to give the impression of "purity" when no such thing exists in the world.

> This seems to be at odds with the goal of token minimization. Lots of small files that are narrowly scoped means less has to be loaded into context when making a change, right?

my solution (as someone who's building something tangential) is to use granular levels of scope - there should be an implicit single file that gets generated from a package at a certain phase of the static tooling's processing. But the package is still split into files for flexibility and DevEx (developer experience). File/folder organization is super useful for humans. For tooling, the package can be collected together and treated as a single unit, but still decomposed based on things like namespaces and top-level definitions (classes, specifications, etc.). That way the tooling has control over how much context to pass in.


Yea, but RAG takes effort. At the very least you need some kind of system to organize the documents and do the retrieval.

My theory is that the AI frenzy has reached new levels of insanity: just throw anything and everything at the model and burn tokens to let the AI figure it all out. Why bother paying the upfront cost of RAG when the models/agents are constantly evolving? Just slap in a markdown file telling the model to check a folder and call it a day.

Like in the design world, people are making minor tweaks (changing the spacing, say) by typing prompts instead of just changing a number in an input field. We are legitimately approaching using LLMs instead of calculators, or memes like that endpoint that calls an LLM to generate the code for some business logic rather than coding the logic directly.


> past month I’ve kept a journal where I put an “X” next to every date where a GitHub outage has negatively impacted my ability to work. Almost every day has an X

Is it really this bad?

I've seen people complain about GitHub, but I thought it was more of a theoretical inconvenience than a real practical one. As in, the uptime for a serious software company should be 99.9%, but two hours down just today, and constant outages over the month that they noticed... that seems way worse.



It's not every day you see a status page this colorful...

Yeah, we use GH heavily at work (not so much GHA for critical workflows, thank god). They have an outage that breaks our git operations at least once a week. Webhooks not delivered, PRs not showing up, git operations not working, API issues… and that's not counting GitHub Actions, which we only use for noncritical workflows.
