Blog Post

GPT Myth Busting

Scott Potter • Dec 05, 2022

Is the new generation of Artificial Intelligence intelligent?

1 minute read

ChatGPT has been making headlines recently, and even your technophobe friends and family members have probably been talking about it. It's inevitable that debates about the benefits and risks of advancements in artificial intelligence (AI) will be headline news as OpenAI's ChatGPT and similar AI systems are released to the public. Should we trust them? Are they safe? And herein lies the first irony... trust.


Trust

The concerns come from a lack of trust in both the systems themselves and the regulatory frameworks within which they are developed and operated. We have probably placed too much trust in the abilities of this new generation of AI systems, which has given rise to these concerns.


The phenomenon known as Automation Bias, studied in social psychology, suggests that humans tend to over-trust decisions made by automated systems. On closer inspection, many responses from LLM (Large Language Model) based AI systems turn out to contain inaccurate and fictional information. In fact, "hallucinated facts" has become the emerging term for this intrinsic behaviour. It is acknowledged that AI can make mistakes, but it's the nature of the mistakes that's unusual and sometimes concerning, often flying under the radar of human reviewers.


LLMs


Large Language Models are not designed or optimised to prioritise factual accuracy. Instead, they are trained to "blur" information from the web into a more abstract representation, which allows them to excel in creative and artistic generation. The ability of an LLM to generate true statements is largely coincidental: those statements appeared frequently enough in the training data to be retained, rather than being a result of explicit design. And here is the second irony...


A Tool for Experts

Due to the convincing but erroneous behaviour of this technology, these AI systems are best in the hands of experts, yet they are available to everyone. Anyone can use them and act on their outputs. Maybe the biggest risk at the moment is the human element... 'who' does 'what' with the output.


Hallucinations can be useful

It is important to acknowledge that not every use case of LLMs requires accuracy.  Some of the most remarkable instances of LLM and multi-modal LLM outputs involve the integration of creativity and "hallucination," resulting in significant artistic value. These outputs offer users novel combinations of styles that they might not have envisioned, adding to their overall appeal.


So what's next?

Trustworthy LLMs need to be paired with other information and natural language processing (NLP) techniques... which of course is already well underway. We think that trustworthy AI systems will emerge from the synergy of LLMs and Knowledge Graphs. Combining the creative power of LLMs with the factual index of Knowledge Graphs will give rise to systems that inspire creativity while providing reliability for knowledge workflows and critical applications. The inclusion of Knowledge Graph technology may arguably make AI systems 'provable' where it is advantageous to do so.
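To make the idea concrete, here is a toy sketch of what checking an LLM's output against a Knowledge Graph could look like. Everything here is illustrative and invented for this example (the tiny triple set, the claims, and the check_claim helper); real Knowledge Graph systems use far richer stores and query languages such as SPARQL.

```python
# Toy illustration: verifying an LLM's claimed facts against a small
# knowledge graph. The graph and the claims below are invented examples.

# A minimal knowledge graph as a set of (subject, predicate, object) triples.
knowledge_graph = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("Seine", "flows_through", "Paris"),
}

def check_claim(subject, predicate, obj):
    """Return 'supported', 'contradicted', or 'unknown' for a claimed triple."""
    if (subject, predicate, obj) in knowledge_graph:
        return "supported"
    # Contradicted if the graph asserts a different object for the same
    # subject and predicate (e.g. a different capital city).
    for s, p, o in knowledge_graph:
        if s == subject and p == predicate and o != obj:
            return "contradicted"
    return "unknown"

# An LLM's fluent answer might decompose into claims like these:
print(check_claim("Paris", "capital_of", "France"))      # supported
print(check_claim("Paris", "capital_of", "Germany"))     # contradicted
print(check_claim("Thames", "flows_through", "London"))  # unknown
```

The key design point is the third outcome: a "hallucinated fact" that the graph knows nothing about is flagged as unknown rather than silently accepted, which is exactly the behaviour a human reviewer tends to miss.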


And finally...

Let's not lose sight of the incredible achievements that the teams behind these emerging technologies have made in understanding us users. The ability of ChatGPT in particular (at the time of writing) to remember the details of a conversation and understand our messages so well is arguably the main breakthrough. The ability to understand our intent, to grasp and retain the evolving context of a discussion, and to respond appropriately is an amazing achievement.


Photo by Mohamed Nohassi on Unsplash
