When ChatGPT was introduced to us humans a few months ago, the instinct of many was to put it to the test. Some asked for recipes, some for letters of introduction, some asked it to summarize long and boring texts, and some had it do their homework. The things it can do are truly innumerable, partly because its knowledge (not wisdom, as we will see later) is based on the hundreds of millions of web pages' worth of information that have been fed to it. What has surprised many is that you can interact and talk with ChatGPT very much as you would with a human being.
However, this is Runlovers and, as you may have guessed from the title, we are talking about someone who figured that, since ChatGPT knows everything and can be asked anything, it would be a great idea to use it as a coach and get a good training plan out of it. How did it go?
What is ChatGPT?
What are we talking about? Unless you have been on Mars for the past few months, you have surely heard of ChatGPT, a free Artificial Intelligence tool developed by OpenAI. If you already know what we are talking about, feel free to skip to the next section :)
What is it for? Let’s start with how you interact with it: through a chat (hence the name) in which you converse, only in this case your interlocutor is a machine that nevertheless responds like a human being, sometimes indistinguishably so. A comparison may help: the assistants you normally deal with when asking about a flight or your phone plan are bots, that is, machines trained with a standard set of answers to common questions. ChatGPT, on the other hand, is endowed with an intelligence that lets it learn by reading the Web and then reprocess and organize information much as a human brain does, returning it on demand just as if two humans were conversing (one of them, namely you, actually is).
Is ChatGPT like Google? No: Google responds to a request by providing a series of more or less relevant links, so what you get is a list of places to look, whereas ChatGPT organizes an actual answer with remarkable accuracy. An example? If you ask Google for a recipe for risotto alla milanese, what you will get is a series of links to blogs that tell you how to make it, but most likely each article will open with a long story about how the author’s grandmother used to make it, and only at the end will you find the recipe. How does ChatGPT respond instead? Relevantly: if you want to know how to make risotto alla milanese, chances are you don’t care about Carla’s grandmother, and in fact it gets straight to the point: ingredients and preparation. That’s it.
Knowing that you can ask it anything, many have thought of asking it the strangest things you could ever ask a computer, like suggesting a workout.
Okay ChatGPT, let’s talk about it
When Rhiannon Williams learned that she had been accepted into the London Marathon, once the initial anxiety and excitement wore off, she felt she needed a program to prepare properly. The alternatives were to turn to a coach or to do something seemingly risky and crazy: ask ChatGPT to write the plan for her.
The first attempt immediately revealed the limitations of the tool, namely the inaccuracies its answers sometimes contain. Her initial question produced a plan in which the only “long” run before the race was a 10-mile run, a bit short to count as a true long run. So she decided to give it another chance, asking for “a 16-week training plan.” The answer? This time the long run turned out to be 19 miles, to be run the day before the race: only a few miles short of the marathon itself and, above all, far too close to it. It was easy to predict that if she had followed that advice, she would have arrived at the starting line exhausted.
Her experiment with ChatGPT ended there, and her preparation continued with a human coach. But Williams was not the only one to bring ChatGPT into the world of sports.
There are tiktokers who have had workouts prepared by it, and there are also those who, for $15, will interview it for you and hand back the result. But how, you may wonder, can someone get paid for a job they don’t actually do? Exactly, and there is a reason you may have guessed by now: the quality of the answer depends not only on ChatGPT but also, and very much, on how it is questioned, that is, on how precisely the request is phrased. A new professional figure has even emerged: the prompter, someone who knows how to get the most out of generative AI systems through the accuracy of the requests they submit.
Its limitations (for now)
In short: setting aside the fact that it sounds (very) human, what ChatGPT says matters more than how it says it. So what is it like, in the end?
It is generic and boring but generally (quite) correct.
It does not establish a human relationship, so it can only guess (as far as an algorithm can) what kind of person is in front of it; it cannot generate an experience, let alone the experience best suited to each individual.
For example, there are those who demand routine and predictability from their training, those who like to be surprised instead, those who want to suffer in order to feel that it is working, and those who just want to have fun with some light movement.
In short, ChatGPT provides standard information. Question it on almost any topic and its answers are fairly accurate (sometimes, especially if you know the subject well, you notice mistakes; its creators themselves warn about the potential inaccuracy of the answers), but they are often terribly dull.
At least at its present stage of evolution, it is a diligent nerd that does its job well but is unlikely to impress you with originality. After all, its kind of artificial intelligence is called “generative,” meaning it can produce new content, but always based on information it has inferred by grinding through millions of web pages.
Another aspect worth considering, confirmed by its creators, is that ChatGPT is not a bot that simply fetches information and presents it to you in an almost human form: it continuously reprocesses it. What does that mean? That the same question may be answered in slightly different ways each time (unless, of course, you ask when America was discovered or how much 2+2 is). In addition, and again at its current stage of evolution, it does not retain memory of previous chats, which is another reason it may give slightly different answers to the same questions. Let’s say it has a prodigious long-term memory but a lousy short-term one :)
AI also tests us
The common uses of systems like ChatGPT are countless, and for those I refer you to the many articles written in recent months. A more interesting aspect to observe is this: when talking about AI, we should not leave ourselves out of the picture, because its performance depends closely on whoever questions it. The more and better we learn to interact with the AI, especially by putting it at our service, the better the results we will get from it.
In short, let us not ask what AI can do for us nor what we can do for it but rather let us ask what we can do to make it work properly for us.
PS: Needless to say, ChatGPT did not write this article, right? You know, these days you never know what’s out there anymore…