We are not parrots – ChatGPT and what it means for communication management

When the American company OpenAI unveiled the latest generation of its AI language model in November 2022, it touched off a firestorm of discussion that has yet to die down. ChatGPT, built on the Generative Pre-trained Transformer (GPT), functions as a chatbot that lets people engage in dialogue with an AI model. Trained on roughly 500 billion tokens of text from sources such as the Common Crawl web archive and Wikipedia, it can respond to questions in “human-like language,” as ChatGPT describes itself. The quality of its answers still swings wildly between brilliant and nonsensical, but the AI learns from each new conversation. Moreover, ChatGPT is not yet connected to the live Internet, so its knowledge extends only to 2021. It is only a matter of time, however, before the bot gains broader data access. Meanwhile, competitors that promise even better performance, such as Jasper Chat, Neuroflash, and YouChat, are waiting in the wings.

This means machines now exist that will, at some point, be able to draw on the full breadth and depth of human knowledge to answer questions without any semantic understanding, yet with trained logic that follows the rules of language. What ramifications should we expect for our work as communicators? PR consultant Matthias Biebl expects a “tipping point in communications.” AI will not replace human creativity and strategic thinking, but it will take over some editorial duties, leaving people more time to concentrate on key tasks. That raises two important questions: First, what exactly are we getting into when we leave information research and language production to AI? And second, how will people acquire the skills needed for the more sophisticated tasks of communication management in the future?

Gary Marcus, an American AI expert and professor emeritus of psychology at New York University, views the launch of ChatGPT as “AI’s Jurassic Park moment,” warning, with a nod to the images from Steven Spielberg’s 1990s blockbuster, against the fatal consequences of deploying an unreliable technology that is open to abuse. Media law scholar Rolf Schwartmann makes a similar point when he notes that ChatGPT cannot, for example, distinguish between a verbatim quote from a judgment of the Court of Justice of the European Union and an interpretation of the same passage. If we do not wish to give up responsibility when using the program, Schwartmann says, the fundamental task is to “set new rules for the relationship between people and technology.” One solution could be to require a transparent referencing algorithm that discloses the sources behind its answers.

Other critics, such as computer scientist Timnit Gebru, go even further. Her warning about the risks of the technology underlying ChatGPT, known as natural language processing, made waves on the internet. In a paper co-authored with other scholars in 2021, she warned of the “dangers of stochastic parrots”: language models that are equipped with near-total access to information and consume significant financial and natural resources, but only ever document the status quo, including social prejudices and power relations. To put it more succinctly: ChatGPT has all the information, but it does not actually know anything. At least for now, it cannot produce new thoughts, except by mechanical chance.

No problem, we might argue: it is enough if the machine simply gathers information for a communication professional as a first step, handles the time-consuming work of drafting text, and offers initial drafts for more complex solutions to problems. But then the question becomes how future generations will acquire the skills needed for more sophisticated tasks such as strategic communication planning, if not through extensive independent reading and writing. ChatGPT and its ilk are also poised to shake up university education if it is no longer possible to be certain that written work was produced independently rather than plagiarized. In a recent analysis of AI and its impact on the PR field, PR scholars Alexander Buhmann and Candace White conclude that “professional communicators may currently be buying into processes they do not fully understand.”

There is no doubt that ChatGPT also raises questions about the future of communication management. But the answers will need to be more nuanced than simply drawing the dividing line at simple tasks for AI and complex creation and strategy for humans. Only if we fully explore the real potential for greater efficiency and effectiveness that lies in the synergy between humans and AI will we stand on the cusp of a fundamental next step in the evolution of strategic communication management.
