Lynn Bradshaw via plug on 8 Dec 2024 13:34:49 -0800



Re: [PLUG] ChatGPT


"ChatGPT blithely gave me an  answer that was totally wrong.  And when I challenged it, it agreed, but  continued spouting nonsense."

That goes to the heart of what I've read from others about why they don't like LLMs much: the seeming total inability to reason about obviously (sometimes provably) false answers they gave just a few moments ago. Even the very capable AIs that exist today are kind of like intelligent aliens who come to this planet but are forced to see the outside world as if from inside a pinhole camera: very restricted in their inputs, and with no agency to ever change those inputs. They have to deal with whatever light of data shines through the pinhole, and they are stuck there. That's part of why I brought up embodied cognition earlier. A would-be artificial general intelligence with a body wouldn't necessarily be forced to meet external demands to map heavily restricted inputs to equally heavily restricted expected outputs, and could learn in a more open-ended, autonomous fashion, resulting in what I would speculate would be a more legitimate machine intelligence.

As for ... other aspects, I guess that's an age-old question that goes back at least to the introduction of the word "robot" into English via the play "Rossum's Universal Robots". The "robots" in that play were actually made of flesh and blood but were seen as stunted and diminished—until they started to think for themselves. R.U.R., and Frankenstein even before it, helped set the standard for the trope of the "cybernetic revolt".

On Sun, Dec 8, 2024 at 9:24 AM Casey Bralla via plug <plug@lists.phillylinux.org> wrote:

On 12/8/24 8:08 AM, Rich Freeman via plug wrote:
> On Sun, Dec 8, 2024 at 7:01 AM Casey Bralla via plug
> <plug@lists.phillylinux.org> wrote:
>> So anyway, I'm not worried about the upcoming robot uprising since
>> they still haven't found a way to put any "I" in "AI".  ChatGPT
>> reminds me of a very smart guy who's read lots of books, but has no clue
>> how the world really works.  It spouts stuff it reads, but has no
>> actual understanding or experience about those subjects.
>>
> So, I'd argue that LLMs actually have human-like intelligence, with
> the caveat that they cannot learn in realtime like a human can (their
> mental model is baked in when it is built).
>
> However, it does not have expert-level knowledge in all fields.
>
> I'd argue that when you asked it about "greaseweazle" you probably got
> a better answer than if you surveyed 10 random people at Walmart.
> Those are humans.  They're just not the humans you would go to in
> order to answer a question like this.

That's a good point, Rich.  But I would never ask 10 random people.  And
if I did, they'd all say they don't know.  ChatGPT blithely gave me an
answer that was totally wrong.  And when I challenged it, it agreed, but
continued spouting nonsense.

It's like the old put-down joke: "It ain't what he don't know. It's what
he knows that ain't true."


[sigh].   I'm just pining for Asimov robots...


___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug