Does ChatGPT signal the death of arts, literature?

Published: 23 February 2023
A bot with knowledge that's hefty,
It can answer with ease,
All your queries and pleas,
And do so with answers aplenty.

ChatGPT is nothing if not confident. After all, it wrote the above limerick all by itself — albeit with specific prompting and some editing from me.

An AI language model developed by OpenAI, ChatGPT uses deep-learning techniques to generate realistic, human-like responses to queries, prompts and conversational starters. Trained on vast swathes of data — about 500 billion pieces of text gathered from the web — ChatGPT can understand and respond to a plethora of topics and questions.

It can hold a conversation, generate basic essays, compose poetry and even produce plot ideas on demand. It can also answer follow-up questions, admit its mistakes, challenge incorrect premises and reject inappropriate requests. ChatGPT is the next big thing in the relationship between humans and machines, apparently. Move over, Arnold Schwarzenegger.

As someone who considers herself a writer, a reader and a lover of art, poetry and literature, I have been bombarded with questions from friends, eager to know what I think of ChatGPT. Am I worried that my job as an Otago Daily Times columnist will be replaced? Is this the death of art and literature? Are humanities degrees now useless? My friends — themselves STEM majors, blissfully confident in their futures — wait in gleeful anticipation for my response.

According to some, ChatGPT represents the death of the college essay; according to others, it heralds the death of art itself. But I am not so sure.

Recently, I was hired by a friend to edit a series of blog posts and book chapters about the implications of Artificial Intelligence — ironically, written almost wholly by ChatGPT itself.

The writing was straightforward, chock-a-block with facts, stats and figures. But it was also incredibly dull and repetitive — utterly lifeless and unengaging.

I wonder whether the novelty factor of this technology distracts from the fact that ChatGPT isn't particularly engaging, insightful or even accurate. The answers it produces are often surface-level at best.

OpenAI admits as much, writing in a recent blog post: "ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as (1) during RL training, there's currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training ..."
