kellogh: i used an analogy yesterday: #LLMs are basically System 1 (from *Thinking, Fast and Slow*), and System 2 doesn't exist, but we can kinda fake it by forcing the LLM to have an internal dialog.
my understanding is that System 1 is more tuned to pattern matching and "gut reactions", while System 2 is more analytical
i think it probably works pretty well, but curious what others think
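The "fake System 2" idea above can be sketched as a two-pass prompting loop: ask the model for an internal dialog first, then condition the final answer on that reasoning. This is only an illustration of the pattern, not anyone's specific implementation; `call_llm`, `system1_answer`, and `system2_answer` are hypothetical names, and `call_llm` is a stub standing in for whatever completion API you actually use.

```python
def call_llm(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    return f"<model output for: {prompt[:40]}>"

def system1_answer(question: str) -> str:
    # "Gut reaction": a single pass, no explicit reasoning step.
    return call_llm(f"Answer concisely: {question}")

def system2_answer(question: str) -> str:
    # Faked deliberation: first elicit an internal dialog,
    # then ask for an answer conditioned on that reasoning.
    reasoning = call_llm(
        "Think step by step about this question, noting doubts "
        f"and counterarguments, but don't answer yet: {question}"
    )
    return call_llm(
        f"Question: {question}\n"
        f"Internal dialog: {reasoning}\n"
        "Given that reasoning, state the final answer."
    )
```

The second pass is what makes the dialog "internal": the reasoning text never has to be shown to the user, only the final answer does.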