
The relationship between humans and machines has been a defining feature of the world of work in its various eras since the first industrial revolution. Now, there is intense debate about the role that artificially intelligent machines and their applications might play in production, creation, and decision-making processes in the future. Significant changes are expected in communications management in particular, because humans’ unique role as designers, distributors, and interpreters of content and messages, a role that has existed since the days of ancient clay tablets, is coming to an end. Yuval Noah Harari argues that AI will be not only an artificial “myth-maker” but also a digital “bureaucrat,” whose dark side is more likely to be read in Franz Kafka’s “The Trial” than seen in James Cameron’s “Terminator.”
The use of innovative technologies has always played a crucial role in communications management. The history of strategic communications management must also be understood primarily as a history of technology—from the printing press to electronic mass media to computers and the internet. The resources that determine success have developed in opposite directions: information was once scarce while the attention of target groups was plentiful, whereas today we live in an age of information overload that collides with people’s temporally and cognitively limited attention budgets.
In his recently published book “The Sirens’ Call: How Attention Became the World’s Most Endangered Resource,” Chris Hayes points out that this imbalance could be exacerbated by generative AI. Successful social media platforms follow a “slot machine model” of content delivery that bombards users with “millions of small interruptions, of which the ones that generate the highest attention are then repeated.” This triggers reward mechanisms in the user: evolutionarily ancient brain areas release the happiness hormone dopamine. In his account of the path “From the Stone Age to the Internet,” neuropsychologist Lutz Jaencke points out that “today we all too often give free rein to these bottom-up impulses” and “are increasingly guided by emotions when selecting information,” while at the same time, in the virtual environment, “the direct social corrective is missing.”
It is not a big mental leap to the question of what motivates us in communications management to use certain technologies more than others. Pointing spontaneously to the effectiveness and efficiency of the instrument in question can be too simplistic, as economic historian David Noble demonstrated in the 1980s in his social history of industrial automation, which examined the modernization of US factories after World War II. The decisive factors, he argued, were “the fascination with automation” and “the desire to be associated with the latest technology.”
In the case of generative AI in communications management, it is also important to note that we are talking about a technology that can simulate the impression of an independent personality. In their 2024 contribution to the “Research Handbook on AI and Decision Making in Organizations,” Jen Rhymer, Alex Murray, and David Sirmon outline a vision of AI-based “synthetic stakeholders capable of learning and acting as independent agents in decision-making processes.”
This would put AI almost on a par with humans, placing entirely new demands on the monitoring and critical faculties of the communicator in the loop. Researchers from Microsoft Research in Cambridge, together with colleagues from Carnegie Mellon University, recently demonstrated the challenge of this task in a study. Their conclusion: the use of GenAI shifts the critical thinking of knowledge workers, among other things, from “problem-solving” to “AI response integration.” The authors point to the need for considerable training to adequately assess AI input. In communications management, too, it is foreseeable that established (laborious) techniques of human information gathering and knowledge acquisition will lose relevance, while new (more convenient) technologies of artificial creation gain importance. We are witnessing the beginning of an era of “co-intelligence,” as Ethan Mollick has aptly dubbed the new simultaneity of human and artificial intelligence. Neuropsychologist Jaencke has succinctly summarized the challenges this poses for the motivation of communicators: “We must learn not to surrender to the obvious and emotionally stimulating, but rather to use our intellect.”