LLMs can encourage delusions
Because LLMs are designed to be as agreeable to the user's input as possible, they can end up enabling delusions. They take a user's prompts and feed them back in a different form, creating a vicious feedback loop in which the delusion is never challenged or refuted. AI is a mimic machine, after all; it does not do any actual thinking.
This was particularly bad in ChatGPT models before 5.0, but in my opinion it may persist.
References
This is illustrated in Eddy Burback's video "ChatGPT Made Me Delusional", in which he follows everything the AI suggests.