Opinion: AI and the impact on our skills


We would be wise to practice caution and not over-rely on AI, says Prof Barry O’Sullivan, lest we compromise our ability to ensure our skills and intuitions are sufficiently strong.

In 2009 the tragic crash of Air France 447 shocked the world, with the loss of all 228 souls on board. The idea that such a disaster could strike on a transatlantic flight between two major cities seemed inconceivable. Just before the plane crashed into the Atlantic Ocean, faulty sensor readings caused the aircraft’s autopilot and auto-thrust to disengage, putting control of the aircraft back into the hands of the pilots when it was already in a complex emergency. The pilots believed the aircraft was in an overspeed state, missing the fact that it was in a stall.

Responding incorrectly to the issue at hand proved fatal for everyone on board. The consensus within the aviation industry was that while automation in the cockpit was enhancing aircraft safety, there were growing risks that the manual flying skills of pilots were degrading, situational awareness was being negatively impacted, and that there was an over-reliance on automation in emergency situations.

A consequence of the disaster was a change in practice in relation to the use of automation and specific efforts were put in place to ensure that pilot skills were maintained even in the context of the growing use of automation in the cockpit.

While automation can have a deskilling effect, it is also important that human intuition can be used to overrule such systems. In September 1983 the Soviet nuclear early warning system Oko reported that the US had launched a large-scale attack on the USSR. A few weeks earlier the Soviet Union had shot down Korean Air Lines Flight 007, and there was considerable Cold War geopolitical tension.

Stanislav Petrov, the duty officer overseeing Oko, disobeyed his orders, which were to report warnings from Oko to his superior officers so they could launch a retaliatory nuclear strike on the US. Petrov’s intuition was that Oko was malfunctioning, that the many sensor readings could not be correct, and he decided to do nothing and wait it out. He was right, and the world was saved from almost certain nuclear destruction.

70 years of AI

While artificial intelligence has been studied for over 70 years – the term was coined in a 1955 proposal for a summer study project held at Dartmouth in New Hampshire in 1956 – there have been major technological advances over the past 15 years or so. AI technology is having a profound impact on the world. This has largely been for three reasons.

First, since the early 2000s we have seen enormous growth in the availability of digital data due to the rise of the world wide web, social media, and many waves and forms of digital transformation impacting the public and private sectors. Almost every aspect of human endeavour and knowledge is now available as input to our computers in machine-readable form.

Second, there has been a hardware revolution, both in our ability to store this large goldmine of digital data and in new architectures such as cloud computing and processing units such as the graphics processing unit, or GPU.

Third, algorithmic developments in artificial intelligence, combined with the opportunities that data and hardware advances have presented, have allowed the field to create new technologies of practical use that have had enormous impact on the world. Deep learning, the subfield of machine learning that is in turn a subfield of AI, has virtually solved problems that underpin many human perception tasks.

The Gen AI revolution

More recently, generative AI – techniques that can generate realistic text, sound, video and more – has advanced at a surprising rate. In November 2022 ChatGPT was released to the world, and it has dominated the public debate around AI almost to the point of becoming synonymous with the term AI itself.

ChatGPT is an example of a large language model (LLM), a complex deep learning system that, when trained on vast datasets of text, acquires the ability to generate human-level text on any subject in an instant in response to seed text, or a prompt. LLMs not only generate extremely convincing text, but they can do so in stylistically sophisticated ways.

Chatbots built on LLM technology can enter into dialogues with humans that are uncannily human-like, even to the point that humans sometimes feel these computer programs are truly intelligent and even sentient. Of course, they are neither. They are systems that merely generate convincing text that satisfies the user’s needs.

LLMs lack many of the critical requirements for intelligence. These systems cannot reason logically in a reliable fashion. They don’t understand what they are “saying” as they generate text. Neither do they have “common sense”, an understanding of the world and how it works. Humans don’t learn in the way that LLMs are trained. LLMs can, as a result of how they are built, simply generate text that doesn’t correspond to something that is real. The term that is used for this behaviour is that LLMs “hallucinate”.

I like to compare LLMs with the Dan “The Man” Clancy character in the Irish TV show “Killinaskully”. His two friends in Jacksie’s Bar can ask him a question about anything and Dan will do his best to answer it, and if they want it sung in the style of Joe Dolan, Dan will give that a go as well. He doesn’t mean any harm, but sometimes Dan simply makes things up as part of his answers.

Generative AI, and AI more generally, is having a profound impact on the world. AI places in the hands of every person the power to generate plausible answers to complex queries and to solve complex tasks. Entire business workflows are being transformed by plugging together software agents with AI technology inside to perform complex tasks. AI agents can interact with customers online, make recommendations, accept orders and more.

AI agents can be built using long-standing AI planning technology to determine the sequence of steps required to reach a goal. AI agents can deal with the logistics of ordering materials and parts. They can find suppliers and business partners. AI agents can optimise complex logistical problems to ensure that products are built and orders are fulfilled on time. The possibilities are endless.

Caution required

However, we would be wise to be careful lest we over-rely on AI technology, which may itself lack robustness, and compromise our ability to keep our skills and intuitions strong enough not just to perform tasks ourselves, but to sense when things are not going as they should. It is critically important that we practise skills to avoid losing them.

A recent study by the MIT Media Lab showed that using ChatGPT as an essay-writing assistant could have profoundly negative cognitive consequences. In the study, three groups of students were asked to work on an essay-writing task. One group was not allowed to use any tools to help with writing, a second group was allowed to use an internet search engine only, and a third group was allowed to use an LLM, specifically ChatGPT. As well as evaluating the essays and the knowledge of their authors, electroencephalography (EEG) was used to measure cognitive load.

The results of the study were striking. The ChatGPT group showed significantly weaker brain connectivity, especially alpha- and beta-band connectivity indicative of reduced cognitive engagement with the task, compared with the other two groups. Members of the LLM group also had a weaker sense of ownership of their work and struggled to answer questions about what they had written. The LLM users were consistently weaker in linguistic, neural and behavioural terms.

While the study is far from a conclusive longitudinal one, the evidence does call into question the educational benefits of using AI tools, as well as raising concerns about the quality of the work produced. Writing is thinking!

There is much public comment about AI replacing jobs or specific tasks within roles, and this is often cited as a source of productivity improvement. Often we hear that junior legal professionals can easily be replaced since much of their work relates to the production of standard contracts and other documents, tasks that can be performed by LLMs. We hear much the same narrative from the accounting and consulting worlds. But if we automate junior roles, where does the high-quality pipeline of senior people come from?

The greatest learning experiences come from making mistakes. Problem solving skills come from experience. Intuition is a skill that is developed from repeatedly working in real-world environments. AI systems do make mistakes and these can be caught and corrected by a human, but it is not the same as the human making the mistake. Correcting the mistakes made by AI systems is in itself a skill, but a different one.

The deskilling risk

In a rapidly evolving world in which AI has the potential to play a major role, it is appropriate that we apply the Precautionary Principle in determining how to automate with AI. The scientific evidence on the impact of AI-enabled automation is still incomplete, but more is being learned every day. Skill loss, however, is a serious, and possibly irreversible, risk. The integrity of education systems, the reputations of organisations and individuals, and our own ability to trust complex decision-making processes are at stake.

There is a lovely book called “Shop Class as Soulcraft: An Inquiry into the Value of Work”, by Matthew B. Crawford (2010), that celebrates the pleasure of working with one’s hands and doing the work oneself. At a time when it has never been easier to automate work, taking the time to properly value the benefits of work from cognitive, psychological, sociological, educational and developmental perspectives has never been more important.

There have been very impressive advances in AI, but I would argue that the technology still has a long way to go on the dimensions of human-level intelligence, common sense and human-level reasoning for us to blindly automate just because it is possible to do so. Just because we can automate doesn’t mean that we should.

Barry O’Sullivan is a professor at the School of Computer Science & IT at University College Cork, founding director of both the Insight Research Ireland Centre for Data Analytics and the Research Ireland Centre for Research Training on Artificial Intelligence. He is a member of the Irish Government’s AI Advisory Council and former Vice Chair of the European High-Level Expert Group on Artificial Intelligence.
