(FTB stderr) – Marcus Ranum:
AI is going to have (for now) a problem with being right about things in fields where there is an objective criterion for right or wrong. An AI cannot bloviate about the nature of dark energy or the shape of a rotation curve: there are verifiable answers. But I hypothesize that fields lacking objective criteria are going to be very easy for AIs to fudge responses into: marketing, psychology, philosophy, MilSF, screenplays, presidential speech-writing, on and on. A friend of mine and I have been playing with asking ChatGPT to write code for us, and have been enumerating the critical mistakes it makes. Perhaps ChatGPT can design a website that looks interesting, but when I asked it to write a C function that added a struct Node to the end of a linked list, it appears to have assumed I meant a struct Node * and accessed its member fields via pointers (instead of allocating memory and initializing a copy). C has objective criteria for good code and bad code, but speechwriting and MilSF do not. …
AI can be used as a tool for elevating mediocrities and, as such, it’s not just a threat to the mediocre, it’s a source of infinite noise. …
![ELIZA conversation](https://jfnet.wordpress.com/wp-content/uploads/2022/12/eliza-conversation.gif?w=450)