As we’ve seen in the news lately, ChatGPT has been making a lot of headlines, with people calling it “AI.” I haven’t used ChatGPT or any of the other “AI” systems that have now been released (e.g. Google Bard, Bing AI, etc.), so these are just some musings on these systems and on what we call AI.
Now, I’m not an expert on AI at all, but it really seems that the systems that exist today are not AI. I can’t find the video, but I’m pretty sure Tom Scott said something along the lines of “ChatGPT is an advanced mimic system.” I would tend to agree with that assessment, since the things these systems generate can look good on the surface but then veer off in an insane direction. For your amusement, I would present GothamChess’s videos on ChatGPT vs. Stockfish and ChatGPT vs. Google Bard.
Let’s think about this for a minute. Clearly, ChatGPT and Google Bard are incapable of playing chess with only legal moves. Google’s DeepMind, however, has built systems that learned chess and beat the strongest engines (AlphaZero, for instance), so what’s the difference here? It seems to me that what ChatGPT and Google Bard have in common is that they are good at predicting what text will come next, but they are absolutely terrible at learning or knowing what is true.
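To make that “predicting what text will come next” idea concrete, here’s a toy sketch of my own in Python: a bigram model that picks the next word based only on which word most often followed the current one in its training text. This is nothing like how ChatGPT actually works under the hood (that’s a neural network trained on an enormous corpus), but it captures the flavor of the task, and notice that nowhere does it have any notion of whether its output is true:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": for each word, count which words follow it.
# A vastly simplified illustration, not how ChatGPT actually works.
training_text = "the cat sat on the mat and the cat slept on the mat"

words = training_text.split()
next_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_counts[current][following] += 1

def predict_next(word):
    """Return the word that most often followed `word` in training."""
    if word not in next_counts:
        return None
    return next_counts[word].most_common(1)[0][0]

# Generate a short continuation: always take the most likely next word.
word = "the"
generated = [word]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)

print(" ".join(generated))  # prints a plausible-looking continuation of "the"
```

The output reads like English, but the model “knows” nothing; it is just replaying statistics. Scale that idea up by a few billion parameters and you get text that is far more convincing, yet still produced the same way: by picking a likely next token, not by checking facts.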
This inability to learn or to know what is true is, in my mind, what says these systems are not AI. To me, an AI system must be able to learn new things and use that information to become smarter. Since these systems don’t seem to be doing that (or simply apologize way too much if you try to correct them), it feels like they are just trying to make you happy and don’t really “know” anything.
It seems like we have several different approaches to this AI stuff at the moment: ChatGPT and Bard are very good at text generation, while something like DeepMind can learn chess in just a few hours. Since DeepMind has the learning capability and ChatGPT/Bard have the capability to generate valid English sentences, a system that could somehow combine the two would be much more of an AI than what we have today.
Perhaps this suggests an updated version of the Turing Test: if an AI really exists, then logically a teacher should be able to tell it something and have it learn from that information. Maybe send an AI terminal to a school, one that can listen to a teacher and has speakers so it can talk, and if it can pass first grade, then it is an actual AI? The idea is that if you can only teach the AI through instruction, it only has access to the information it learned in school, and it can still use that information later, then we have shown it is intelligent.