We totally found this doing financial document analysis. It's so quick to do an LLM-based "put this document into this schema" proof-of-concept.
Then you run it on 100,000 real documents.
You find there actually are so, so many exceptions and special cases. And so begins the journey of constructing the layers of heuristics and codified special cases needed to turn ~80% raw accuracy into something asymptotically close to 100%.
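A minimal sketch of what one layer of that heuristic stack can look like, assuming the LLM hands back a dict of raw strings (the field names, formats, and special cases here are hypothetical, not from any real pipeline):

```python
import re
from datetime import datetime

def normalize_amount(raw: str):
    """Coerce messy currency strings ('$1,234.50', '(500)', 'USD 2 000')."""
    s = raw.strip().replace(",", "").replace(" ", "")
    negative = s.startswith("(") and s.endswith(")")  # accounting-style negatives
    s = re.sub(r"[^\d.]", "", s)  # drop currency symbols and codes
    if not s:
        return None  # flag for human review rather than guessing
    value = float(s)
    return -value if negative else value

def normalize_date(raw: str):
    """Try the date formats we've actually seen in documents; fail loudly otherwise."""
    for fmt in ("%Y-%m-%d", "%m/%d/%Y", "%d %b %Y", "%B %d, %Y"):
        try:
            return datetime.strptime(raw.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    return None  # unrecognized format -> route to review queue

def postprocess(extracted: dict) -> dict:
    """Apply the heuristic stack to one raw extraction; None means 'needs a human'."""
    return {
        "amount": normalize_amount(extracted.get("amount", "")),
        "date": normalize_date(extracted.get("date", "")),
    }
```

Every one of those format branches and the parenthesized-negative check is a special case you only discover by running on real documents; returning None instead of guessing is what keeps the accuracy curve climbing instead of silently corrupting the output.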
That's the moat. At least where high accuracy is the key requirement.
In case you haven't come across it yet, this concept is all the rage among the VC thoughtbois/gorls. Not sure if Jaya Gupta at Foundation coined it or just popularized it, but: context graph.
Could be a good fundraising environment for you if you can find the zealots of this idea.