Rich Freeman via plug on 9 Dec 2024 05:34:36 -0800
Re: [PLUG] ChatGPT
On Sun, Dec 8, 2024 at 11:51 PM Steve Litt via plug
<plug@lists.phillylinux.org> wrote:
>
> Rich Freeman via plug said on Sun, 8 Dec 2024 08:08:26 -0500
>
> >I heard somebody make the analogy that asking an AI to do work is like
> >hiring a college intern, but one that doesn't ever get better.
>
> My experience with ChatGPT over the past 18 months is that it's getting
> more accurate very quickly.

Well, it probably is. My analogy wasn't well expressed. It isn't so much
that it's a college intern who never gets better. It's more like hiring a
college intern, then firing them after three months and replacing them
with a new college intern, and repeating that indefinitely. The new
interns have benefited from updated curricula, so they are more familiar
with recent trends and so on. However, they remain completely
inexperienced in the specifics of your business, and they will never
develop that experience.

Think about software development. I'd argue one of the main challenges in
software development is defining your problem and understanding it. That
is essentially a learning activity. You can start out knowing everything
in the textbook, but nobody's actual real-world problem is described
exactly in the textbook. Software development is about creating something
that doesn't exist - even if you're just integrating an off-the-shelf
solution, you're still doing creative work.

Sure, an AI can apply textbook knowledge when you ask it to. However,
that's all it can do. It also isn't applying engineering principles and
so on - an LLM is working with language. When ChatGPT gets better, it is
because they gave it a better textbook to study, not because it has a
better appreciation of the specifics of your own problem. It is just
hunting through textbooks to try to synthesize a wall of text that
resembles what a textbook might say about your problem.

For example, I asked ChatGPT: "Create a manifest for a k8s job to run fio
to benchmark the performance of a PVC." It created this:

https://pastebin.com/fwGS7KHj

The boilerplate aspects of this look fine (I didn't actually test it, but
it seems right). That's because you can scrape boilerplate out of a doc
page. However, the logic has some serious flaws. It creates an ephemeral
volume, writes the results to that volume, and then terminates, which
instantly deletes the volume along with the results. So it produces no
usable output (just dumping the results to the console would be better,
because container logs are still retrievable after the pod terminates).
It also references an image that doesn't even exist - basically a
hallucination, though it does reference a GitHub project name, so it
LOOKS like it might be an image. (Of course, an image name with nothing
else in front of it has to come from Docker Hub, not GitHub.)

Basically it looks like what an intern might do if they had to fake it
until they make it... (A rough sketch of what a more usable job might
look like follows below.)

-- 
Rich
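Here is that sketch. It is untested and makes a few assumptions: the PVC
under test is named "my-pvc" (substitute your own claim name), the job
name is arbitrary, and instead of pointing at some dedicated fio image it
installs fio from the Alpine package repos at startup, so it doesn't
depend on an image that may not exist. The important difference from the
generated version is that the results go to stdout, so they survive the
pod terminating:

apiVersion: batch/v1
kind: Job
metadata:
  name: fio-pvc-benchmark
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: fio
        # stock image; fio is installed from the Alpine repos at runtime
        # rather than relying on a dedicated fio image that may not exist
        image: alpine:3.20
        command: ["sh", "-c"]
        args:
        - >
          apk add --no-cache fio &&
          fio --name=pvc-test --directory=/data --rw=randrw --bs=4k
          --size=256M --direct=1 --group_reporting
        volumeMounts:
        - name: target
          mountPath: /data
      volumes:
      - name: target
        persistentVolumeClaim:
          claimName: my-pvc  # assumption: an existing PVC you want to test

Once the job finishes, the results are still there:

kubectl logs job/fio-pvc-benchmark

and they stay retrievable until the job is deleted, which is the whole
point - nothing worth keeping gets written to a volume that disappears
with the pod.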