Lynn Bradshaw via plug on 8 Dec 2024 05:16:55 -0800



Re: [PLUG] ChatGPT


It isn't generally accepted that LLMs have anything like human-level intelligence or sentience. That isn't to say AGI could never happen but ChatGPT etc. aren't it.

On Sun, Dec 8, 2024 at 8:08 AM Rich Freeman via plug <plug@lists.phillylinux.org> wrote:
On Sun, Dec 8, 2024 at 7:01 AM Casey Bralla via plug
<plug@lists.phillylinux.org> wrote:
>
> So anyway, I'm not worried about the upcoming robot uprising since
> they still haven't found a way to put any "I" in "AI".  ChatGPT
> reminds me of a very smart guy who's read lots of books but has no
> clue how the world really works.  It spouts stuff it reads, but has
> no actual understanding of or experience with those subjects.
>

So, I'd argue that LLMs actually have human-like intelligence, with
the caveat that they cannot learn in real time like a human can (their
mental model is baked in when the model is built).

However, it does not have expert-level knowledge in all fields.

I'd argue that when you asked it about "greaseweazle" you probably got
a better answer than if you surveyed 10 random people at Walmart.
Those are humans.  They're just not the humans you would go to in
order to answer a question like this.

I heard somebody make the analogy that asking an AI to do work is like
hiring a college intern, but one that doesn't ever get better.  You
don't hire a college intern for their knowledge/experience.  You hire
them for their potential.  Actually that's even true of those who are
experienced since just about ANY skilled job requires the worker to
grow as they work through a project.  If you started a new project in
an area where you have a lot of experience, you'd probably still
encounter situations/requirements/etc that would cause your thinking
to evolve a bit, so that the solutions you create are better adapted
to the problem.  Hence the state of the art is always improving.

The current crop of AIs definitely won't be replacing you anytime in
the near future.  That doesn't mean that they aren't as smart as
people.  It just means that it takes more than "just a person" to
replace you.  In fact, these sorts of flaws only make them seem more
human-like to me.

And that's setting aside the fact that even if their mental model
actually was as good as yours, the way they currently work they're
still incapable of growth in real time.

That's my sense of it at least.  Perhaps somebody who has delved more
deeply into LLMs might disagree.

--
Rich
___________________________________________________________________________
Philadelphia Linux Users Group         --        http://www.phillylinux.org
Announcements - http://lists.phillylinux.org/mailman/listinfo/plug-announce
General Discussion  --   http://lists.phillylinux.org/mailman/listinfo/plug