@meowski @lain @anderseknert An LLM cannot be polite. It has no semantic understanding or subjective experience.
It is a generation tool that emulates text probabilistically from context.
AI reviews are effectively impossible to rely on because they are similarly probabilistic. (Languages with sufficiently strong proof systems can probably mitigate this enough, but people don't tend to enjoy those much.)
The code quality is not the point. The program's speed/performance/whatever is not the point.
User agency & freedom are the point. Which also means that someone reading the source should be able to rely on it to pick up helpful cognitive patterns (as well as coding habits, skills & so on).