Hal Swerissen

Let me tell you about the wild and crazy ChatGPT. This AI behemoth was bred by the mad scientists over at OpenAI and it’s the closest you’ll get to talking to a supernatural being. With a vast database of information, ChatGPT can wax poetic about anything and everything. Whether you’re looking for the answer to life’s great questions or just some good old-fashioned tomfoolery, ChatGPT is your ticket to a wild and trippy adventure through the mind of a machine. So grab a drink, sit back, and strap in for a wild ride with ChatGPT, the most far-out AI on the market.

Full disclosure – as is now all the rage, that hyperbolic introduction was written in about five seconds by ChatGPT itself, after I asked it to write in the gonzo style of Hunter S Thompson of ‘Fear and Loathing on the Campaign Trail’ fame. Setting aside the hype, ChatGPT’s description of itself is pretty much accurate. Check it out for yourself. The rest of this article is by ‘me’ and a bit less out there.

Artificial Intelligence is a hot topic because it’s about to move from niche to general use. So far it’s been mostly niche applications for industrial processes, software development, design assistance, driverless cars, logistics scheduling, health care diagnostics and so on. It’s what drives the back end of major online services like Google and Facebook to optimise personalised content and advertising for users. This is why you get lots of ads about backpacks if you start Googling walking tours of New Zealand.

AI is booming because the underlying modelling and maths have been adapted and improved. Many of the statistical and modelling techniques used in AI applications were developed in other fields. For example, the iterative regression and decision tree models we used to make predictions about the population spread of COVID and whether we needed lockdowns are similar to statistical techniques used to develop chatbots.
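A decision tree of the kind mentioned above is just a cascade of yes/no splits that leads to a prediction. The sketch below is purely illustrative: the variables and thresholds are invented for the example, not taken from any real epidemiological model.

```python
# A hand-rolled two-level decision tree (illustrative only; the thresholds
# are invented for this sketch, not from any real COVID model).
def recommend_lockdown(daily_case_growth_pct: float, icu_occupancy_pct: float) -> bool:
    """Walk the splits of a tiny decision tree to a yes/no prediction."""
    if daily_case_growth_pct > 10:   # first split: are cases growing fast?
        return True
    if icu_occupancy_pct > 85:       # second split: is the system near capacity?
        return True
    return False

print(recommend_lockdown(12.0, 40.0))  # rapid case growth -> True
print(recommend_lockdown(3.0, 90.0))   # hospitals near capacity -> True
print(recommend_lockdown(3.0, 50.0))   # neither trigger -> False
```

Real models learn the splits and thresholds from data rather than having them hand-coded, but the basic shape – classify an input by walking a tree of conditions – is the same.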

Nor is the hype new. Chatbots have been around for 60 years, but they were dumb as a post until about 10 years ago, when new models to understand and generate natural language were developed and paired with neural network programs. Before that, chatbots like Eliza, the original, mimicked conversations with simple preprogrammed responses. We lost interest when they couldn’t do much and the novelty wore off.

Today’s emerging digital assistants are very different and much smarter. They use statistical models to analyse and generate speech and to calculate the probability that the responses they produce are plausible. The bigger the data set, the closer the statistical match and the more plausible the answer. The AI ‘learns’ by progressively building up and refining a network of layered classifications that are used to analyse input and generate responses. These models produce sophisticated, plausible answers in everyday language, and they do it fast.
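The idea of ‘calculating the probability that a response is plausible’ can be shown with a toy next-word model. This is a deliberately tiny sketch – real systems use vast neural networks over billions of words, not word-pair counts over one sentence – but the statistical principle is the same: pick the continuation that was most likely in the data you’ve seen.

```python
from collections import defaultdict, Counter

# Toy corpus standing in for the AI's training data (hypothetical example).
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

# Count how often each word follows each other word (a "bigram" model).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_plausible_next(word):
    """Return the statistically most plausible next word and its probability."""
    counts = follows[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

word, p = most_plausible_next("cat")
print(word, p)  # "sat" and "ate" each follow "cat" once, so probability 0.5
```

Note that the model has no idea what a cat is – it only knows which words tended to come next. That is exactly why a bigger data set gives a closer match, and why plausibility and truth can come apart.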

But plausibility is not truth. If the best match to the data is plausible, but false, that’s what you will get – a fast, plausible, false answer! There are risks in relying on statistical plausibility alone. Not only that, if the data the AI uses is biased, derogatory, or dangerous you will get biased, derogatory and dangerous plausible answers.

As a spectacular example, when Google hurried to introduce its own AI assistant ‘Bard’ in response to ChatGPT, it managed to get the answer to a basic question wrong at the launch. Bard suggested the James Webb telescope had produced the first pictures of a planet outside the solar system, when in fact it was the European Southern Observatory. Bard produced a plausible but wrong answer and Google forgot to check. Investors read this as a sign that Alphabet, Google’s parent, wasn’t up to the task, and unsurprisingly its market value fell by billions of dollars.

The other irritating problem with plausible statistical matching is that when you put in a vague question or comment, or the AI’s match to the available data is poor, you get back plausible vague responses, often at great length. Not exactly ‘bullshit’, but not helpful either. Probably best thought of as ‘chatsplaining’.

There are very real ethical and privacy concerns too. Would you want your digital assistant making autonomous decisions about whether or not to tell the school when and where the self driving car will pick up your six year old this afternoon? These technical and ethical problems are not trivial.

But AI designers are alive to these issues. It is in no one’s best interest to have false, dangerous and nasty digital assistants. More sophisticated designers add rules and parameters to their programs to counteract these problems. For the moment they also use human evaluation and fine-tuning to improve accuracy and ‘truth’.

Solving these problems comes at a cost. Analysing very large data sets to generate plausible and validated answers that meet rules of civility and privacy at an acceptable level of risk takes enormous processing power and considerable human tweaking. At the moment, for example, ChatGPT only runs on historical data. It can’t tell you what is going on in real time and it gives you plenty of warning that it’s still a work in progress. Moving to real time will be expensive and the business model isn’t clear yet.

Not surprisingly, anxiety, skepticism, hype and hope about the general application of digital assistants are everywhere, from universities worried about cheating to overly critical commentators rubbishing the errors and sometimes vacuous responses that chatbots can produce. On the other side are the overly optimistic investors driving up AI companies’ share prices. For the moment the hype will be hard to halt.

Google has already had a panicked response to ChatGPT with the introduction of its own AI assistant ‘Bard’, and rushing didn’t go well. Entertainingly, Chinese tech giant Baidu has entered the fray with ‘Ernie Bot’. In case you’re wondering, Ernie stands for Enhanced Representation through Knowledge Integration; the GPT in ChatGPT stands for Generative Pretrained Transformer. There are a range of others.

But AI has real potential to be yet another digital revolution. Opening up AI to everyone means we can all have digital personal assistants to help us find information, answer specific questions, design personalised experiences, run the house and car and manage our daily lives. ChatGPT is the first widely available personalised (draft) assistant that can be used interactively to ‘wax lyrical’ about everything and anything. It’s already out there for anyone to try and soon a version will be built into search engines and personal assistants.

Nor is generative artificial intelligence restricted only to conversations, text searches, modelling and logistics. The image that goes with this article was produced artificially by ChatGPT’s cousin, DALL-E 2.

The hype will have to be managed. A staged approach is probable, one that starts by enhancing existing, relatively safe and straightforward processes like shopping, internet searches, diary scheduling, managing emails and so on. The focus will probably be on helping users simplify and speed up these processes by adding a smart front end that you can either talk to or text to get it to do specific tasks. One that remembers what you did in the past and learns your preferences. The stuff you expect from personal assistants.

After all, if you’re trying to put a meeting together with six people in your team, who would not want digital assistance to check everyone’s diaries and preferences and come back with options and suggestions? Similarly, having digital assistance to sort out a set of flights, airport transfers and accommodation sounds good to me. Or, if you’re doing research on a topic, being able to ask for answers to specific questions, source information and critique your understanding of what you’ve learnt would be fabulous, provided you’re smart enough to evaluate the responses.

Moving from intelligent assistance with particular tasks to a more general intelligent personal assistant that can help you across a range of tasks is a bigger leap.

But it’s not hard to imagine a world where your assistant sorts out your morning alarm, manages work logistics, provides support on professional and technical tasks, books and orders for you at restaurants, arranges for a car to pick you up after dinner, turns on the heating in the apartment as you’re driving home and helps you sort out what streaming programs you want to watch when you get there. How hard that will be to do, and how far we allow digital assistants to go, remains to be seen.

The realistic hope is that AI can be carefully scaled up, sped up and validated in real time to make sure intelligent digital assistance is useful, polite and not too risky for everyday use for specific everyday tasks. The next decade will be interesting. But autonomous decision making about six year olds by your personal digital assistant is probably still a fair way off yet.