
> Chain-of-thought is crippled by structured outputs

I don't know if this is true. Libraries such as Pydantic AI (and, I would assume, the model provider SDKs) stream distinct event types. If CoT is needed, a <think> section is emitted first, and the structured response follows once the model begins its final answer.
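As a rough sketch (the `<think>` tag and the helper below are assumptions for illustration, not any particular SDK's API), the reasoning section and the structured payload can be separated before parsing:

```python
import json

def split_think_and_payload(raw: str) -> tuple[str, dict]:
    """Split a model response into its <think> section and the
    structured JSON that follows it. Hypothetical helper: assumes the
    reasoning is wrapped in literal <think>...</think> tags."""
    reasoning = ""
    rest = raw
    if "<think>" in raw and "</think>" in raw:
        start = raw.index("<think>") + len("<think>")
        end = raw.index("</think>")
        reasoning = raw[start:end].strip()
        rest = raw[end + len("</think>"):]
    return reasoning, json.loads(rest)

reasoning, payload = split_think_and_payload(
    "<think>User wants totals per region.</think>"
    '{"metric": "revenue", "group_by": "region"}'
)
```

In a real streaming setup you would consume the reasoning events as they arrive rather than splitting text after the fact, but the separation of concerns is the same.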

Structured outputs can be quite reliable if used correctly. For example, I designed an AST structure that lets me reliably generate SQL. The model has tools to inspect data points and view their value distributions (quartiles, medians, etc.). Once I get the AST back, semantic validation is easy (just walk the tree like a compiler would). Once validation passes (or a failure forces a re-prompt with the error), I walk the tree again to generate SQL. This lets me reliably generate SQL that I know won't fail during execution, gives me a lot of control over which data points are used together, and ensures valid values are used for them.

I think the trick is just designing the right schema to model your problem, and understanding how deep an answer might come back.




