> OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals on the use of its products.
What if Anthropic's morals are "we won't sell someone a product for something it's not realistically capable of doing with a high degree of success"? The government can't do something if it's literally impossible (e.g. "safe" backdoors in encryption), but it's legal for them to attempt it even when failure is predetermined. We don't know that's what's going on here, but you haven't provided any evidence sufficient to differentiate between those scenarios, so it's fairly misleading to phrase it as fact rather than conjecture.
My point is that they have far more knowledge about what the product is capable of, and where its limitations lie, than the government does. A company expressing doubt that its product can be used safely for a given task, even knowing the risk to its ability to make a sale for that exact purpose, is far more trustworthy than a potential buyer who claims to understand but also refuses to agree not to use it for that. I know this isn't a universally popular opinion, but I wish more companies acted responsibly by not trying to maximize profits at the expense of social good.
I don't understand any interpretation of this whole saga that claims Anthropic was acting selfishly here. I could at least understand (though I would vehemently disagree with) a claim that it's bad for them to refuse to sell something they genuinely did not think was safe for the task it was being purchased for, but the idea that they're somehow "imposing" morals on others is nonsensical to me. If anything, I'd expect that trying to sell a complex software system for a purpose it's unfit for might even draw scrutiny for potential fraud in a healthier regulatory environment.
The relevant (unanswered?) question for this thread is who's operating and managing that deployment, and to what extent the provider (or subcontracted FDEs) is involved in integrations. I would be surprised to learn that the deployment is actually independently operated. Sure, the machinery can be considered a product, but the associated service and support engagements are at least as relevant to take into account.