GPT Myth Busting
Is the new generation of Artificial Intelligence intelligent?

ChatGPT has been making headlines recently, and even your technophobe friends and family members have probably been talking about it. As OpenAI's ChatGPT and similar AI systems are released to the public, it's inevitable that debates about the benefits and risks of advancements in artificial intelligence (AI) will dominate the news. Should we trust them? Are they safe? And herein lies the first irony... trust.
Trust
The concerns come from a lack of trust in both the systems themselves and the regulatory frameworks within which they are developed and operated. Yet we have probably placed too much trust in the abilities of this new generation of AI system, and that is what has given rise to these concerns.
The phenomenon known as Automation Bias, studied in social psychology, suggests that humans tend to over-trust decisions made by automated systems. On closer inspection, many responses from LLM (Large Language Model) based AI systems turn out to contain inaccurate and entirely fictional information; indeed, "hallucinated facts" seems to be the emerging phrase for this intrinsic behaviour. It is acknowledged that AI can make mistakes, but it's the nature of the mistakes that is unusual and sometimes concerning, often flying under the radar of human reviewers.
LLMs
Large Language Models are not designed or optimised to prioritise factual accuracy. Instead, they are trained to predict likely continuations of text, "blurring" the information from the web into a more abstract statistical representation, which is precisely what allows them to excel at creative and artistic generation. When an LLM produces a true statement, that is largely coincidental: the statement appeared frequently enough in the training data to be retained, rather than being verified by design. And here is the second irony...
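To make this concrete, here is a deliberately tiny sketch in Python: a toy bigram model, nothing like a real LLM in scale or architecture, but trained on the same underlying principle of frequency. The corpus is invented for illustration, and it deliberately contains a false claim about Sydney, which the model learns just as readily as the true statements.

```python
import random
from collections import Counter, defaultdict

# Invented toy corpus for illustration. It contains a falsehood
# (Canberra, not Sydney, is the capital of Australia), which the
# model will learn just as readily as the true statements.
corpus = (
    "paris is the capital of france . "
    "paris is the capital of france . "
    "paris is the city of love . "
    "sydney is the capital of australia ."
).split()

# Count bigrams: the "model" estimates P(next word | current word)
# purely from co-occurrence frequency. Truth never enters into it.
bigrams = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    bigrams[current][following] += 1

def generate(start, length=6):
    """Sample a continuation word by word, weighted by frequency."""
    word, output = start, [start]
    for _ in range(length):
        candidates = bigrams.get(word)
        if not candidates:
            break
        word = random.choices(list(candidates), weights=list(candidates.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("paris"))   # e.g. "paris is the capital of france ."
print(generate("sydney"))  # e.g. "sydney is the capital of australia ." -- fluent, frequent, false
```

A real LLM replaces these bigram counts with billions of learned parameters, but the training objective is the same in spirit: maximise the likelihood of the next token, not the truth of the sentence.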
A Tool for Experts
Due to the convincing but erroneous behaviour of this technology, these AI systems are best in the hands of experts, yet they are available to everyone. Anyone can use them and act on their outputs. Maybe the biggest risk at the moment is the human element... 'who' does 'what' with the output.
Hallucinations can be useful
It is important to acknowledge that not every use case of LLMs requires accuracy. Some of the most remarkable instances of LLM and multi-modal LLM outputs involve the integration of creativity and "hallucination," resulting in significant artistic value. These outputs offer users novel combinations of styles that they might not have envisioned, adding to their overall appeal.
So what's next?
Trustworthy LLMs need to be paired with other information retrieval and natural language processing (NLP) techniques... which of course is already well underway. We think that trustworthy AI systems will emerge from the synergy of LLMs and Knowledge Graphs. Combining the creative power of LLMs with the factual index of a Knowledge Graph will give rise to systems that inspire creativity while providing the reliability needed for knowledge workflows and critical applications. The inclusion of Knowledge Graph technology may arguably make AI systems 'provable' where it is advantageous to do so.
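As a rough sketch of what that pairing might look like, consider the following. Everything here is a hypothetical placeholder: the toy graph, the 'lookup' function and the 'llm_generate' stub stand in for a real triple store and a real model API. The pattern itself is simple: fetch a verified fact from the graph, inject it into the LLM's prompt, and flag any answer that could not be grounded.

```python
# Hypothetical sketch of the LLM + Knowledge Graph pattern. The graph,
# lookup() and llm_generate() are illustrative stand-ins, not a real API.

# A Knowledge Graph reduced to its essence: verified facts keyed
# by (subject, relation).
KNOWLEDGE_GRAPH = {
    ("France", "capital"): "Paris",
    ("Australia", "capital"): "Canberra",
}

def lookup(entity, relation):
    """Fetch a verified fact from the graph; returns None if unknown."""
    return KNOWLEDGE_GRAPH.get((entity, relation))

def llm_generate(prompt):
    """Stub standing in for a call to a real LLM."""
    return f"[LLM drafts a fluent answer to: {prompt}]"

def grounded_answer(entity, relation, question):
    """Anchor the LLM's fluency to the graph's factual index."""
    fact = lookup(entity, relation)
    if fact is None:
        # No verified fact available: flag the output rather than
        # present a potential hallucination as knowledge.
        return llm_generate(question) + " [UNVERIFIED]"
    # Inject the verified fact so the answer is grounded, not guessed.
    return llm_generate(f"{question} Verified fact: the {relation} of {entity} is {fact}.")

print(grounded_answer("Australia", "capital", "What is the capital of Australia?"))
```

In this arrangement the factual claim in the output is traceable to an entry in the graph, which is what makes 'provable' behaviour plausible; the LLM contributes the fluency around the fact rather than the fact itself.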
And finally...
Let's not lose sight of the incredible achievements that the teams behind these emerging technologies have made in understanding us, the users. The ability of ChatGPT in particular (at the time of writing) to remember the details of a conversation and to understand our messages so well is arguably the main breakthrough. Understanding our intent, grasping and retaining the evolving context of a discussion, and responding appropriately is an amazing achievement.
Photo by Mohamed Nohassi on Unsplash

