
Apart from the fact that it's focused on logical fallacies, this is reminiscent of AWS Bedrock Automated Reasoning, which also appears to involve some kind of LLM-guided translation of natural language into logical rules ... which are then used to validate the output of the LLM application.

https://aws.amazon.com/blogs/aws/prevent-factual-errors-from...
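A minimal sketch of the general pattern being described, not AWS's actual implementation: suppose an LLM-guided step has translated a natural-language policy into logical rules expressed as predicates; those predicates can then mechanically check claims made in an LLM application's output. The policy, the `extract_claims` format, and the rule itself are all hypothetical.

```python
# Illustrative sketch only -- the policy, claim format, and rule below are
# hypothetical, standing in for rules an LLM might derive from prose.

def extract_claims(output: str) -> dict:
    # Toy stand-in for structured claims pulled from an LLM's output,
    # e.g. "age=17; discount=student".
    claims = {}
    for part in output.split(";"):
        key, _, value = part.strip().partition("=")
        claims[key] = value
    return claims

# Hypothetical policy translated into a logical rule:
# "a student discount requires age >= 16".
RULES = [
    ("student discount requires age >= 16",
     lambda c: c.get("discount") != "student" or int(c.get("age", 0)) >= 16),
]

def validate(output: str) -> list[str]:
    # Return the names of any rules the output violates.
    claims = extract_claims(output)
    return [name for name, rule in RULES if not rule(claims)]

print(validate("age=17; discount=student"))  # no violations -> []
print(validate("age=15; discount=student"))  # flags the age rule
```

The interesting part of the real system would be the translation step (prose policy to formal rules), which this sketch skips entirely; the validation side is comparatively simple once rules exist.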
