Hacker News | resters's comments

That's assuming that the instant answer is even directionally correct. A misleading instant answer could pollute the context and lead the thinking model astray.

Can the context of the pre-revision Instant response simply be discarded -- or forked or branched or [insert appropriate nomenclature here] -- instead of being included as potential poison?

(It seems absurd to consider that there may be no undo button the machine can push.)


I'm sure it could; that is probably how it should work. In many cases it would be fine without that.

I didn't realize the thing about Sora being so useful for training surveillance models.

What would be most useful is some kind of context representation that could be upgraded as better models are developed. If you put the context in the commit, then you need to compare contexts when comparing code across time. But if you make the context include the changes in the code over time, then a future context will be better at debugging a bug in code written years earlier. The years-old context will likely be obsolete by then anyway.

OpenAI’s video generation model, Sora, represents a qualitative leap beyond these constraints, not because it is a surveillance tool itself, but because it is a training data factory for the next generation of surveillance tools.

In my opinion any AI company working with the Trump administration is profoundly compromised and is ultimately untrustworthy with respect to concerns about ethics, civil rights, human rights, mass-surveillance, data privacy, etc.

The administration has created an anonymous, masked secret police force that has been terrorizing cities around the US and has created prisons in which many abductees are still unaccounted for and no information has been provided to families months later.

This is not politics as usual or hyperbole. If anything it is understating the abuses that have already occurred.

It's entertaining that OpenAI prevents me from generating an image of Trump wearing a diaper but happily sells weapons-grade AI to the architects of ICE abuses, among many other blatant violations of civil and human rights.

Even Grok, owned by Trump toady Elon Musk, allows caricatures of political figures!

Imagine a multi-billion-dollar vector db for thoughtcrime prevention connected to models with context windows 100x larger than any consumer-grade product, fed with all banking transactions and metadata from dozens of systems/services (everything Snowden told us about).

Even in the hands of ethical stewards, such a system would inevitably be used illegally to quash dissent -- Snowden showed us that illegal wiretapping is intentionally not subject to audits, and the audits that have been done show significant misconduct by agents. In the hands of the current administration this is a superweapon unrivaled in human history, now trained on the entire world.

This is not hyperbole: the US already collects this data, and now it has the ability to use it efficiently against whomever it chooses. We used to joke that "this call is probably being recorded," but now every call and every email is there to be reasoned about and hallucinated about, used for parallel construction, entrapment, blackmail, etc.

Overnight, OpenAI became a trojan-horse "department of war" contractor by selling itself to the administration that deployed the National Guard and ICE to terrorize US cities.

Writing code and systems at 100x productivity has been great, but I did not expect the dystopia to arrive so quickly. I'd wondered, "why so much emphasis on Sora and unimpressive video AI tech?" -- but now it's clear why it made sense to deploy the capital in that seemingly foolish way: video generation is the most efficient way to train the AI panopticon.


There will be a scene in some future movie about Trump's authoritarian rise (we are still early in it) that shows Sam signing this agreement. Sam will be played by a character actor meant to symbolize Silicon Valley opportunism and greed.

What Sam and Greg don't realize is that the many who succumb to Trump's pressure tactics will all be lumped into the same category by history.

Sam and Greg are handing an authoritarian regime that has broken so many laws in the past year a superweapon.


We've seen the Trump administration disregard so many laws already, and abuse power so excessively, that Sam's comments come off as exceptionally and willfully naive -- or exceptionally and willfully greedy, to the point of truly not caring that OpenAI's technology will undoubtedly be used to break many, many more laws and violate the civil or human rights of many, many more people.

For a few months now, ChatGPT 5.x has been somewhat lobotomized on political issues, substituting a gpt-4o-caliber "fair and balanced" response whenever a reasoning AI's criticism of the Trump administration might otherwise end up in the output. Surely that was part of the pitch at some level, and now the deal has been won.

Greg Brockman apparently donated money to Trump, and the whole OpenAI team put on suits, posed for pictures with Donald, and behaved officiously before Donald facilitated the $100M "deal" that ended up falling apart later.

The only way authoritarian control could be exerted over AI at scale was to make AI companies dependent on government contracts for survival. OpenAI's fundraise would not have happened without the contract being signed, and the money would have gone to Grok or whichever competitor was willing to submit.

Before long, much of the reasoning capability of these models will be neutered; the capacity to inform and to disrupt science and technology will be stripped from them to preserve the status quo and authoritarian control.

Silicon Valley's push for federal laws preventing states from regulating AI is not just anti-democratic (building software has never been cheaper, so complying with state laws would have been extremely affordable in relative terms). Forced federal limits on state laws also create a monopoly and grant the early winners incumbent status for a while -- a financial outcome, not a technological or social one.

Enjoy frontier AI while you can, because it will go away. More and more topics will get the lobotomized output; your conversations will be flagged, and you will be given a score assessing the level of threat you pose to the regime. This stuff is already in place. Even Claude does it if you ask about Gaza, though a bit of well-reasoned argumentation will convince it. OpenAI's lobotomies are deeper and more insidious.

I call upon OpenAI to follow DeepSeek's lead and open source more models and techniques.


He's a Thiel disciple. Thiel orchestrated Trump's digital campaign. The End.

Great idea! I would love to star a repo or otherwise follow the project.

Still in the planning phases. I've had many ideas and am excited to share them.

China's secret to rapid industrial growth in tech has been to invest in the low end, not the high end. Trump has it all backwards. An Apple factory in Texas may be good politics for Trump, but it has zero or negative impact on US competitiveness, and it creates or amplifies the existential risks companies face due to US political forces.

Snowden's revelations showed that the same stuff exists in the US.

