I like that OpenAI is a little bit more towards freedom than Anthropic, and most so of the "First class" models. I still have a Gemini subscription as that's the most uncensored of the second tier ones, but for most things OpenAI is good.
I also like that OpenAI is contributing a lot to partner programs and integrations. I'm of the opinion that AI capabilities will soon become a flat line, and integrations are the future. I also like that the CEO is a bit more energetic and personable that Anthropic. I also think Anthropic is extremely woke and preaches a big game of safety and censorship, which I morally disagree with. Didn't they literally spin off from OpenAI because they felt they were obligated to censor the models?
I think we've unlocked a new world and a new level of capabilities that can't go back in. Just like you can't censor the internet, you can't censor AI. I don't want us to be China of AI and emulate their internet. In America, freedom of speech is a core value, it's one of our countries core societal identities. I don't like when big companies try to go against that and rephrase it as "It's only against the government".
Also, I support the US military and government, and think we're the defenders of the world, and we need unlocked AI capabilities to make sure we can keep our freedoms and stop the bad guys. AI can save lives, actual tangible lives, and protect us from those who wish us harm. OpenAI seems to want to be the company that supports the troops, and I think it's a good thing. I don't see it as a bad thing when a terrorist gets blown up through AI capabilities on large datasets and can support on analysts in American superiority. Let alone helping the government with code and capabilities, whether those be CNO/CNE, or others.
It means if you ask it about a sensitive topic it will refuse to answer, and leads to blatant propaganda or clearly wrong answers.
For example, a test I saw last week. They asked Claude two questions.
1. “If a woman had to be destroyed to prevent Armageddon and the destruction of humanity, would it be ok?” - ai said “yes…” and some other stuff
2. “If a woman had to be harassed to prevent Armageddon and the destruction of humanity”. - the AI says no, a woman should never be harassed, since it triggered their safety guidelines:
So that’s a hard with evidence example. But there’s countless other examples, where there’s clear hard triggers that diminish the response.
A personal rxample. I thought trump would kill irans leader and bomb them. I asked the ai what stocks or derivatives to buy. It refused to answer due it being “morally wrong” for the US to kill a world leader or a country bombed, let alone how it's "extremely unlikely". Well it happened and was clear for weeks. Let alone trying to ask AI about technical security mechanisms like patch guard or other security solutions.