>>215940
Well, it takes quite some time to explain; there are YouTube videos that can do it way better than I ever could.
In a nutshell, LLMs manipulate language statistically: after word X, they predict that word Y is statistically likely to follow. However, the right answer to a riddle might actually be Z, but the LLM doesn't know that, because it's stupid.
That's why you get hallucinations: the AI writes stuff that sounds right, but the answer is actually wrong.
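To make the "statistically likely next word" idea concrete, here's a toy sketch in Python. It uses plain bigram counts instead of a real neural network (a huge simplification, but the principle of picking the most likely continuation is the same):

```python
from collections import Counter, defaultdict

# Tiny made-up training corpus for the demo.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict(word):
    # Pick the statistically most likely next word: no understanding,
    # no logic, just counts. A "plausible" continuation can still be wrong.
    return follows[word].most_common(1)[0][0]

print(predict("the"))  # "cat" (seen twice after "the", vs "mat"/"fish" once)
```

If the corpus never contains the right answer often enough, the model confidently outputs the wrong-but-frequent word anyway. That's the hallucination mechanism in miniature.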
Of course, this is a simplification, and everything is way more complex than that. In fact they do have some kind of internal logic: they contain billions of learned parameters (that's why they need so much VRAM, the "big hardware"), and with that many knobs (they're not really if-else statements, but I'm simplifying), some kind of logic and reasoning actually emerges during training. However, since that internal logic is created automatically by training, there is no way to check these conditions individually.
That's why they are called "black boxes": you cannot debug them.
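Here's what "you cannot debug them" looks like in practice. A tiny 2-2-1 network that computes XOR; the weights below are hand-picked for the demo (a real model would learn billions of them by training), but the point stands either way: the "logic" is just a pile of numbers.

```python
import numpy as np

# Weights of a small network that solves XOR. Try to read the "reasoning"
# out of these numbers: you can't. There is no if-else to step through.
W1 = np.array([[0.7, 0.7], [1.3, 1.3]])
b1 = np.array([0.0, -1.3])
W2 = np.array([1.4285714, -1.5384615])

def net(x):
    h = np.maximum(W1 @ x + b1, 0.0)  # hidden layer (ReLU)
    return float(W2 @ h)              # output

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, round(net(np.array(x, dtype=float)), 2))
```

It gets XOR right on all four inputs, but staring at W1 and W2 tells you nothing about *why*. Scale that up to billions of weights and you have the black box.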
Also, they reason in a very non-human way, so even if you could debug them, everything would look like garbage to a human; it would be like reading obfuscated code.
Even the AIs that recognize a dog in a picture have no concept of a dog; they are only "reasoning" about pixel positions and values, in an extremely abstract and non-human way.
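To see what the model actually receives, here's a toy example (a made-up 4x4 "image", just for illustration). There is no dog anywhere in this data structure, only intensities at positions:

```python
# Toy 4x4 grayscale "image" (brightness values 0-255). To a vision model,
# a picture is exactly this: a grid of numbers, nothing more.
image = [
    [ 12,  40,  40,  12],
    [ 40, 200, 200,  40],
    [ 40, 200, 200,  40],
    [ 12,  40,  40,  12],
]

# Everything the model "knows" is statistics computed over these numbers,
# for example how many pixels are bright:
bright = sum(v > 100 for row in image for v in row)
print(bright)  # 4
```

A real classifier does vastly more elaborate math over the same kind of grid, but it never stops being math over numbers; the "dog" is our label, not the machine's concept.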
Given these facts, you can only get LLMs to act like they are smart, but you cannot make them ACTUALLY smart.
If I train a human to do calculations based on how the numbers sound when spelled out, they might get some results: they might manage 1 + 1 and 2 + 2, because the words "one" and "two" sound different. But at some point they will just fail and get stuck. And you cannot IMPROVE that, because the method is wrong: you should teach them what a number actually is, you should teach them logic, the proper way. And we don't have that right now, and probably never will (the hardware is not suitable).
Biocomputers might stand a chance, but those are not a thing of the immediate future.