Rich Freeman via plug on 8 Dec 2024 16:28:47 -0800
Re: [PLUG] ChatGPT
Ok, I promise I won't keep beating this dead horse...

On Sun, Dec 8, 2024 at 4:34 PM Lynn Bradshaw via plug
<plug@lists.phillylinux.org> wrote:
>
> "ChatGPT blithely gave me an answer that was totally wrong. And when
> I challenged it, it agreed, but continued spouting nonsense."
>
> That goes to the heart of what I've read about others and why
> they don't like LLMs much:

BTW, I consider myself among them. I don't consider LLMs super-useful.

> the seeming total inability to reason
> about obviously (sometimes provably) false answers given a few
> moments ago. Even very capable AIs that exist today are kind of like
> intelligent aliens who come to this planet but are forced to see the
> outside world like on the inside of a pinhole camera, very restricted
> in their inputs,

You keep selling me on the idea that LLMs are very human-like. This is
how people act. Maybe not the people on this list, or the people you
tend to associate with, who represent about 0.1% of the population.
However, this is how people in general act.

> and with no agency about ever changing said inputs.

As I mentioned before, I do concede this point. This is where LLMs as
they are designed today diverge from all but the most basic animals:
just about anything with a brain adapts in real time. I suspect that
with enough resources you could get an LLM to do this, but it wouldn't
be practical cost-wise. The hardware used to train these models is
many orders of magnitude more expensive than the hardware used to run
them. Put enough money into it and you could probably get one to
learn - probably not all that well, but perhaps as well as somebody of
below-average intelligence.

> As for ... other aspects, I guess that's an age-old question that
> goes back to at least the introduction of the word "robot" into the
> English language from the play "Rossum's Universal Robots". The
> "robots" in that play were actually made of flesh and blood but were
> seen as stunted and diminished - until they started to think for
> themselves. R.U.R. and Frankenstein even before that helped set the
> standard for the trope of "cybernetic revolt".

This is actually along the lines of my argument here. When it comes
down to it, intelligence isn't all that easy to define. We want to
define it in a way that separates humans from other animals, but all
sorts of animals do all sorts of really clever things. Arguably what
sets us apart most is our use of language as a way to map the
conceptual models in our brains onto the models in the brains of those
around us.

The idea of turning words into vectors of concepts sounds like
something right out of Plato. Sure, it is just one aspect of
"intelligence" divorced from all the others, but it is pretty
remarkable all the same, which is why it has gained so much hype.

I'd argue that one of the biggest problems with LLMs is that they're
terrible at logic and math. That's also a very human-like foible.

In any case, I'm not at all suggesting that LLMs should be running
many business operations. No doubt they can be useful for certain
types of data processing/pre-filtering/etc. When it comes down to
decision-making, though, using them amounts to taking the very worst
aspects of human nature and making your process dependent on them.
I've spent the last few decades building automation solutions that
ensure humans are constrained in their ability to manually make
certain types of decisions.
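As an aside, to make the "words as vectors" idea above a bit more
concrete, here's a toy sketch in Python. The words, vectors, and
numbers are all made up for illustration - real models learn
embeddings with hundreds or thousands of dimensions from data - but
cosine similarity really is the standard way "closeness in meaning"
between two embedding vectors gets measured:

    # Toy word-embedding sketch. Each word maps to an invented 3-D
    # "concept" vector; real models learn far higher-dimensional
    # vectors from data. Cosine similarity measures how close two
    # vectors (and, by proxy, two meanings) are.
    import math

    embeddings = {
        "king":  [0.9, 0.8, 0.1],
        "queen": [0.9, 0.6, 0.1],
        "apple": [0.1, 0.2, 0.9],
    }

    def cosine(a, b):
        # Dot product divided by the product of the vector lengths;
        # 1.0 means the vectors point in exactly the same direction.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)

    print(cosine(embeddings["king"], embeddings["queen"]))  # ~0.99, similar
    print(cosine(embeddings["king"], embeddings["apple"]))  # ~0.30, not

An LLM does something vastly more elaborate, of course, but "nearby
vectors mean related concepts" is the core trick.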
It is the human-like aspects of LLMs that make them most unsuitable
for most decision-making. LLMs are pretty human-like. That's the
problem.

--
Rich