@Tattooed_Mummy Since "AI" nowadays means anything that takes in data and changes its output based on that input, it is a marketing strategy and a scam - but I already wrote "marketing", and the two are more or less interchangeable.
Some of these methods can be helpful for certain tasks, but you need an expert to use them and to check the results.
@Tattooed_Mummy We are trying to replace humans with humanlike computer programs. What could possibly go wrong? Calling it now: we will soon have equivalents of babysitters, teachers and psychologists for AIs - assuming the next steps in AI development are memory and self-correction.
@tob @root42 @Tattooed_Mummy
I am not so sure about that. 30 years ago Eliza was about the best you could get for "therapy" chat bots; now there are things like Replika, Elomia, Mindspa, etc. We have come a long way in the last 30 years. Note that I am not saying good, just that we have come a long way.
@nikatjef @tob @Tattooed_Mummy My point. And memory is already on its way and will be another game changer for current models. I do think that AI will still make great strides, but I don't see the point - or rather, I see a lot of drawbacks and dangers. We totally lose sight of the fact that whatever we do, we should do it for humanity, not just "because".
But it is not intelligent. It can randomly put together responses based on training and input; nothing it says is anything but random words strung together to SOUND like a human. There is no medical or psychological knowledge in there, just random responses based on the data it was fed while being trained to sound as if it knows anything.
People who mistake RNG what-if code for "it is almost sentient" have no clue.
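A toy sketch of the point above (nothing like a real transformer - just a made-up bigram word-chain with invented training text): output that statistically follows the training data can read fluently while encoding no knowledge at all.

```python
import random

# Hypothetical "training" text, invented for illustration only.
training_text = (
    "the patient should take the medication daily "
    "the medication may cause side effects "
    "the patient may report side effects daily"
)

def build_bigrams(text):
    """Record which words follow which in the training text."""
    words = text.split()
    table = {}
    for prev, nxt in zip(words, words[1:]):
        table.setdefault(prev, []).append(nxt)
    return table

def generate(table, start, length, seed=0):
    """Sample a word chain: each next word is drawn at random from the
    words observed after the current one -- no meaning, only statistics."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

table = build_bigrams(training_text)
print(generate(table, "the", 8))
```

Every sentence it emits is built only from "which word tended to follow which"; it has no representation of what a patient, a medication, or a side effect is.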
No one is claiming they are sentient or that they understand what they are saying, but the fact is that LLM systems are out-scoring humans on various collegiate tests, and they are diagnosing diseases that medical professionals are missing.
@tob @WhyNotZoidberg @root42 @Tattooed_Mummy
They are not "acing" the tests, instead they are out-scoring many people, but still missing various question. And they are missing questions where they do have the answers when the questions are asked a different way.
It turns out that humans can sometimes do kind-of-ok on tests given a very carefully constructed environment.
So what is your point?
Again, we are not claiming they are sentient or that they even understand the content.
So they are not actually AI. (No "AI" is AI; they are just being sold as AI to tech bros and clueless capitalists.)
Also, nice try at turning the argument around, but "It turns out that humans do kind-of-ok on tests given a very carefully constructed environment" is either a typo or simply doesn't make sense.
@tob @nikatjef @root42 @Tattooed_Mummy Over here, an AI that was used (as a test, double-checked at every point by human pharmacists) in pharmacies to identify medication and recommend its correct use was very quickly taken offline, since it turned out to be severely wrong on many occasions - often deadly so.
@nikatjef @root42 @tob @Tattooed_Mummy They are unable to tell the truth. All they can do is mimic the truth, with no actual check that they DO tell the truth. Relying on AI for facts is not a good idea, since they cannot determine what a fact is.
As someone else put it on here: we have burned billions of dollars, and are destroying the environment with enormous data centers consuming drinking water and electricity, to produce tech that makes computers pretend they can't do math.
"No one is claiming they are sentient or that it understands what it is say"
@nikatjef You are not. I am not. People at DAIR are not. Sam Altman is not. But some otherwise clever and observant people who helped create LLMs, and should know better, are definitely claiming they can think. Forgotten the headlines from a year ago?
The forces of suggestion and anthropomorphization are strong.