ChatGPT says (Score:1)
The Slashdot article highlights real and important limitations, and it is right to warn against overtrusting raw model outputs. However, it overgeneralizes by treating current brittleness as permanent, ignoring (a) reproducible gains from chain-of-thought (CoT) prompting and related methods, (b) the demonstrated effectiveness of engineered mitigations (self-consistency, tool use, retrieval, formal verification), and (c) the clear trajectory toward hybrid, verifiable systems that meaningfully reduce brittleness.

A more balanced conclusion: today's LLMs are powerful, imperfect, and increasingly integrable into systems that verify and augment their outputs. Research should focus on measurement, mitigation, and safe, verifiable deployment rather than on declaring reasoning a permanent mirage.
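To make "self-consistency" concrete: the idea is simply to sample several independent reasoning chains and keep the majority answer. A minimal sketch in Python, where sample_answer is a hypothetical callable standing in for one CoT-prompted model query (not any particular library's API):

    from collections import Counter

    def self_consistent_answer(prompt, sample_answer, n_samples=10):
        # Draw n independent chains of thought; each call returns a final answer string.
        answers = [sample_answer(prompt) for _ in range(n_samples)]
        # Majority vote; the agreement fraction doubles as a crude confidence signal.
        best, count = Counter(answers).most_common(1)[0]
        return best, count / n_samples

Even a wrapper this simple is the essence of the self-consistency recipe reported to improve accuracy on reasoning benchmarks, which is why "brittle" is not the same as "useless".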