Hacker News | bg24's comments

I think it depends on the company. In large companies, the role of PM probably won’t change that much. However, PMs who are technical and hands-on can bring significantly more value by leveraging AI tools.

There’s another path for PMs that the article and most of the comments don’t seem to mention.

Technical PMs are now in a great position to start their own companies. In the past, many were blocked or handicapped by the inability to code. With AI-assisted development, that barrier is much lower, which gives them a lot more leverage to build products themselves.


There is a lot of skepticism about OpenAI's survival. I am a user of both Claude (Max) and Codex (Plus). Some neutral points:

- Anthropic owes much of its enterprise growth to AWS. Yes, their own talent matters as well.

- AWS is investing for a purpose: solving problems with multi-agent systems. They call themselves the "exclusive third-party cloud distribution provider for OpenAI Frontier, which enables organizations to build, deploy, and manage teams of AI agents." I think the multi-agent landscape will be production-ready in 2026 for solving really complex problems. AWS saw something in Codex and OpenAI's models.

- On circular investments: if you make $100B of your revenue from an ecosystem of players who spend $50B on your infra... where else would you go?

I work for another cloud provider, not AWS.


If Codex 6.0 is better than Opus 4.9, things will flip. While OpenAI has many common enemies trying to box them into being a consumer company, they are equally enterprise-focused. They absolutely need to do well with the foundation model; everything else depends on that.

Well, Codex is better than Opus right now. I have both subscriptions, and use Claude for grunt work + Codex for reviews. Codex is comparable at code writing but does much better with tools, skills, and ad hoc investigations, say, launching Emacs and inspecting internal Emacs state on the go.

Same, I also have both subscriptions ($100 Max and $200 Pro), and I am considering canceling the Max plan, but I will give it another month on watch.

The doomer sentiment is quite baffling to me. What trouble is OpenAI in? Definitely none after GPT 5.3. They have the model and they have the compute; people just don't realize it yet.

It might be that I am in a Twitter bubble, but most people already seem to be team Codex.


Same here. So I have to resort to speaking elsewhere (notes app) and copying/pasting.

Would be nice to see an OCI runtime, and whether it can deliver high-performance I/O compared to the others we have today (e.g., gVisor).


I am on a Max subscription for Claude, and hate the fact that OpenAI has not figured out that $20 => $200 is a big jump. Good luck to them. In terms of the model, just last night Codex 5.2 solved a problem for me that other models kept going round and round on, with almost the same instructions. That said, I still plan to stay on $100 Claude (overall value across many tasks, ability to create docs, co-work), and may bump up my OpenAI subscription to the next tier should they decide to introduce one. Not going to $200 even with 5.3, unless my company pays for it.


I'm coding about 6-9h per day with Codex CLI on the $20 Plus sub, occasionally switching to extra-high reasoning and feeding it massive contexts, all tools enabled, sometimes 2-3 terminal sessions running in parallel and I've never hit limits... I operate on small-ish codebases but even so I try to work in the most local scope possible with AGENTS.md at the sub-directory levels.

Are you really hitting limits, or are you turned off by the fact you think you will?


You are correct :-) I am turned off by the fear that I will hit the limit if I use it more. But you gave me confidence. I guess $20 can go a long way. I think only once in the last 3 months have I been rate limited in Codex.


You should look into Kilo Pass by Kilo Code (https://kilo.ai/features/kilo-pass). It's basically a fixed subscription where your credits roll over each month, and you also get free extra credits, which are used up before paid credits. It's similar to paying for Cursor, except the credits roll over, which is why I'm contemplating moving to it: I don't want to be locked into any one LLM provider the way Claude Code or Codex lock you in.


I was wondering how Kilo Code's Kilo Pass pricing compared to OpenRouter's top-up pricing. After some digging, the main difference I found is that OpenRouter provides a standard API key (sk-or-...) that works in any application (LangChain, curl, your own Python apps), while Kilo Pass credits are tied to the Kilo Gateway, which is designed to power the Kilo Code extension (VS Code/JetBrains) and CLI. Kilo Code does not appear to let you generate a "Kilo API key" for use in your external Python scripts or third-party apps. But the monthly bonus credits are sweet.
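To illustrate what "a standard API key that works anywhere" buys you: OpenRouter exposes an OpenAI-compatible HTTP endpoint, so even a plain stdlib Python script can talk to it. A minimal sketch, assuming the chat-completions route; the key value and model slug below are placeholders, and the actual network call is left commented out since it needs a real key:

```python
import json
import urllib.request

# Placeholder key; a real one comes from the OpenRouter dashboard.
OPENROUTER_KEY = "sk-or-REPLACE_ME"

payload = {
    "model": "anthropic/claude-sonnet-4",  # hypothetical model slug
    "messages": [{"role": "user", "content": "Say hello."}],
}

req = urllib.request.Request(
    "https://openrouter.ai/api/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {OPENROUTER_KEY}",
        "Content-Type": "application/json",
    },
)

# Sending it requires a real key, so this stays commented out:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Because the wire format is the standard OpenAI one, the same key plugs into LangChain, curl, or the official OpenAI client (with `base_url` pointed at OpenRouter) with no gateway lock-in.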


Yes, it's for development not deployment.


I guess the jump is on purpose. You can buy Codex credits and also use Codex via the API (manual switching required).


I use Codex in OpenCode through the API and find the experience quite enjoyable.


Need to try OpenCode. Thanks.


Yes. Also, you can have these Postgres replicas across regions.


With a little experience, I realized that it makes sense even for an agent to run commands/scripts for deterministic tasks. For example, to find a particular app out of a list of N (which can be 100) with complex filtering criteria, the best option is to run a shell command that produces the specific output.

In this way, you can divide a job to be done into blocks of reasoning and deterministic tasks. The latter are scripts/commands. The whole package is called a skill.
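The deterministic half of such a skill can be sketched as a tiny script the agent invokes instead of reasoning over every record token by token. The app records and filter criteria here are made up for illustration:

```python
# Hypothetical inventory the agent would otherwise have to scan in-context.
apps = [
    {"name": "billing-svc", "lang": "go", "replicas": 3, "region": "us-east-1"},
    {"name": "auth-svc", "lang": "python", "replicas": 1, "region": "eu-west-1"},
    {"name": "search-svc", "lang": "go", "replicas": 5, "region": "us-east-1"},
]

def find_apps(apps, lang, min_replicas, region):
    """Deterministic filter: the 'script' block of a skill."""
    return [
        a["name"]
        for a in apps
        if a["lang"] == lang
        and a["replicas"] >= min_replicas
        and a["region"] == region
    ]

print(find_apps(apps, "go", 2, "us-east-1"))  # ['billing-svc', 'search-svc']
```

The agent's job reduces to choosing the filter arguments (a reasoning block) and trusting the script's output, rather than eyeballing 100 records itself.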


There are one-off things, and then there are exponential improvements, both in guardrails and in ChatGPT's ability to handle these discussions.

This type of discussion might be very much possible in ChatGPT in 6-24 months.


"I don't trust LLMs to do the kind of precise deterministic work" => I think the LLM is not doing the precise arithmetic. It is the agent, with lots of knowledge (skills) and tools. Precise deterministic work is done by tools (deterministic code). Skills bring domain knowledge and how to sequence a task. The agent executes it. The LLM predicts the next token.

