The freakonomics of ChatGPT, a modern Artificial Intelligence chatbot
You’d be forgiven if you’re sick of hearing about ChatGPT and AI (Artificial Intelligence). Its adoption rate has broken records: over 100 million active users within two months of launch, making it the fastest-growing consumer application in history. TikTok took nine months and Instagram 2.5 years to hit the 100 million benchmark. With word spreading that fast, it is almost impossible to miss.
ChatGPT is a crap name, though. I mean, you could drive a truck through the blank stares I get whenever I mention it to friends who are not tech-savvy. After repeated efforts at getting them to remember it, I take the hint that they don’t care. As a brand, it’s not easy to remember. It’s not catchy or slick, and it doesn’t hint at any significance. Yet if you consider the audience, the brand makes a lot of sense.
ChatGPT stands for “Chat Generative Pre-trained Transformer”. In layman’s terms, it is a machine that can understand human language and respond in a conversational manner. It is so good that it has passed the Turing Test: a test introduced by Alan Turing in 1950 that measures a computer program’s ability to mimic human communication in a five-minute text-based conversation with a human evaluator, who then judges whether they have been talking to a human or a computer.
AI is also not new. The field of research dates back to the 1950s, and the idea of thinking machines is older still.
What makes ChatGPT interesting is how it answers questions. ChatGPT uses a technology called Natural Language Processing (NLP). This means the machine is trained on data such as webpages and books, works out an answer to your question, and then presents that answer in human-like language.
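If you’re curious about what that looks like under the hood, here is a rough sketch of how a developer might ask ChatGPT a question through OpenAI’s Python library. The model name and the question are placeholders for illustration, and you would need your own API key for it to run.

```python
# A minimal sketch of asking ChatGPT a question via OpenAI's Python library.
# Assumes the `openai` package is installed and an API key is available in the
# OPENAI_API_KEY environment variable; the model name is illustrative only.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Explain ethics in one paragraph."},
    ],
)

# The reply comes back as ordinary text, written in a conversational tone.
print(response.choices[0].message.content)
```

The chat interface most people use is essentially a friendly wrapper around this kind of exchange: you send it a question, it sends back an answer in plain language.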
The questions I asked ranged from the philosophical to the practical. At first, I tested it with some metaphysical-type questions: “What do you know about philosophy?” “Explain ethics.” “When you give answers to questions, do you consider morals or ethics?” That kind of thing.
And then I argued with its answers. “This sounds like you have a moral opinion.” “If you don’t have personal opinions on moral issues, why mention your opinion about them?”
The results were interesting.
Basically, this machine is a clever monkey-see, monkey-do robot. It uses complex, predictive logic to analyse the data fed into it.
“My primary goal is to provide accurate and reliable information and to assist users to the best of my abilities. In answering questions about moral issues, I provide information about what is generally considered to be right or wrong by many people and societies, based on the information that has been provided to me through my training data.” - ChatGPT
Unfortunately, we don’t know what this training data comprises, and it may contain author bias, that is, the personal biases of the software developers.
“it is important to note that the texts that were included in my training data may contain biases, and these biases could be reflected in my responses. It is also possible that the information in my training data may be incomplete or out of date, which could affect the accuracy or reliability of my responses.” - ChatGPT
Likewise, the data may be incomplete. In that case, ChatGPT may deduce an answer from the related data it does have.
“If you have a specific question about a book or document, I may be able to use my understanding of the language and the concepts that it covers to provide more information or generate a response. However, it is also possible that I have not been trained on the book or document that you are interested in, in which case I may not be able to provide a helpful response.” - ChatGPT
It’s definitely a fascinating machine, and I encourage you to explore how it can help you improve your business or your skill set. It’s easier than you think. It’s like Siri, but better. Just remember, though: it is a machine, and it is only as good as its driver.
P.S. Manage Marketing predominantly relies on human experience. However, we have adopted AI technology and use it where appropriate.