gimulnautti: One of the most common #machinelearning #hallucinations is #recommendation engines relentlessly pushing you political content from influencers you wouldn’t touch with a ten-foot pole.
And it’s not a one-off; it’s constant. When the training set doesn’t cover the inference being asked for, the system hallucinates it.
With recommendation engines, the only training signal is engagement. With generic #LLMs the parameters are learned by the system itself, but the same problem generalises across both.
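To make the mechanism concrete, here’s a minimal toy sketch, pure illustration: the “outrage” feature, the numbers, and the item names are all made-up assumptions, not any real platform’s pipeline. A model fit on nothing but logged engagement learns that outrage predicts clicks, so at serve time it ranks political content first for every user, including users whose history contains none.

```python
import numpy as np

# Toy sketch, illustrative assumptions only: one hand-made item feature
# ("outrage") and one logged label ("engaged"). In these synthetic logs,
# outrage strongly predicts engagement (rage-clicks, hate-shares).
rng = np.random.default_rng(42)
n = 5000
outrage = rng.uniform(0, 1, n)
engaged = (rng.uniform(0, 1, n) < 0.2 + 0.6 * outrage).astype(float)

# A tiny logistic regression fit on the only signal the objective has.
w, b = 0.0, 0.0
for _ in range(500):  # plain gradient descent on the cross-entropy loss
    p = 1 / (1 + np.exp(-(w * outrage + b)))
    w -= 0.1 * np.mean((p - engaged) * outrage)
    b -= 0.1 * np.mean(p - engaged)

# Serving time: hypothetical candidates for a user with zero political history.
candidates = {"gardening_tips": 0.05, "influencer_politics": 0.95}
scores = {k: 1 / (1 + np.exp(-(w * v + b))) for k, v in candidates.items()}
print(sorted(scores, key=scores.get, reverse=True))
# -> ['influencer_politics', 'gardening_tips'], for every user alike:
#    the user's own preference was never a training input, so the model
#    fills that gap with the population-level engagement pattern.
```

Nothing here is “wrong” from the model’s point of view: it is answering the only question it was ever trained on. The same shape of failure shows up whenever the objective omits the thing you actually care about.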