by Howard Green

It’s no exaggeration, to any person or any robot, to claim that Artificial Intelligence has arrived. It has done so without its previous aura of mystification and impracticality. Now, with applications such as ChatGPT, it seems finally to be materialising into what technology like this can truly be: allegedly, productive for all of those who use it.

There has consistently been an irrationality surrounding AI, spawning apocalyptic visions of sentient robot overlords in a post-singularity world. There are legitimate concerns about the use of AI, especially in activities that require a level of human judgement, correctness and authority. But there has also been an overcaution (in some cases not entirely unfounded), with the sometimes purposeful rejection of AI technologies in fields where they could work beneficially alongside human productivity. Nowhere has this overly ‘cautious’ behaviour seeped in more than in academia and general education.

Anyone studying a postgraduate, undergraduate or even general educational course has probably witnessed a level of paranoia among their academic tutors and peers about the risks of using AI in educational assessments. Some even advise avoiding the applications altogether. This illicit ‘substance’ has spread throughout our educational institutions, supposedly causing academic harm to whoever dares touch it.

This country’s educational institutions are always slow-acting, not least on the encouragement and use of new technologies, in part because of the profit motive entrenched in our universities. The idea of a student ‘conning’ their way to a degree by using AI to write all their essays would send any university vice-chancellor’s head into a spin. In reality, however, this is unlikely to be how the technology is actually applied.

Behind this is a forced narrative that education is becoming a dead purpose; that the young people still in education have now acquired a tool to avoid the absorption of actual knowledge, the perils of effort, and the dive into the warm pool of achievement. This is a manipulation of the truth, a severe undermining of how students actually work, and an overstatement of how effective AI is as a tool for the lazy.

As any student who has even tried to use a programme such as ChatGPT will find, the quality of work it provides is far from useful. The likelihood that a university student can type in a command and receive an essay surpassing the quality of one they could’ve achieved through a less-than-rigorous all-nighter is minuscule. Even when asked to reference or find studies that fit an argument, it will simply make them up, inventing believable-sounding research topics and attaching the names of real academics. Our ‘great’ technology, when met with the expanse of academic knowledge accessible on the internet, decides it cannot be bothered and would much rather make things up.

A Turing test between AI and a student who’s gone out a little too much has not yet resulted in victory for the side of the robots. The current limitations of this new technology have been exposed in the academic sphere. Of course, being AI, it is capable of developing seemingly on its own through repeated and varied use, but it must be acknowledged that accessible AI in its current form is nothing more than an advanced search engine, even in its endeavours to imitate human learning.

Academia and wider education continue to question the educational legitimacy of tools such as search engines. The thought of more refined access to knowledge than searching through page after page within a library also made the skin crawl of those who ran universities when search tools were first introduced, an unease that sometimes lingers to this day. AI improves upon this, offering a more generalised refinement of information instead of the trawling through web pages that the user would otherwise have to do themselves.

So, what should our conception of AI be for those in education? It is neither the robots stealing our educational institutions nor a slight advancement on Google. This is a technology we have developed, and one best used in its current form for the purposes of summarising and paraphrasing. An ever-sticky problem for any student is understanding a concept but lacking the words to describe it without outright plagiarising. AI can help navigate that initial difficulty. Even so, it should be applied tactically and sparingly.

On a surface level, the possibility of programmes such as ChatGPT becoming dominant in education, and specifically in educational assessment, is looming. So is education now a dead purpose? No. Many still subscribe to the traditional, outdated idea of education – especially in our primary and secondary schools – as the absorption of knowledge and the spitting out of it on assessed command. Ultimately, AI will simply become a tool not for feigning knowledge, but for the commanding, organising and searching of it, much like search engines or even, in a broad sense, libraries.

How can educational institutions properly address this prospect, then? They should, first of all, avoid panic and misinformation. Inevitably, the systems universities already operate to detect plagiarism will catch up and find the commonalities present in AI outputs. Until that day, a more rational assessment should be put forward: without the readiness or ability yet to ‘adapt’ these technologies into educational advancements, students should be shown how to operate AI and to know its limits. Fearmongering about the possible consequences of ‘getting caught’ using these systems irresponsibly is more likely to end in a lack of ambition to improve educational output, or in doubling down on irresponsible use. The denial of this type of technology, and of its advancements in the near future, will only beget apathy in education.

Featured image “BETT and Education Show 2019: Lots of robots” by p_a_h is licensed under CC BY 2.0.

The Norwich Radical is non-profit and run by volunteers. You can help us continue our work by becoming a supporter. All funds raised help cover the maintenance costs of our website, as well as contributing towards future projects and events.
