The author of the Medium article specifically hobbled the models to keep them from thinking the problem through and got a wrong answer, but that would happen with humans too and doesn't prove much.
I would argue that most humans would either give the correct answer or just say "I don't know." Some might confidently give the wrong answer, but humans will readily refuse to follow instructions in plenty of circumstances where they decide the instructions aren't worthwhile. LLMs don't do this, and I'd argue that the ability to reject a premise is fundamental to engaging with things in a truly logical way.