Holding, taking a stand, grounding – communication responsibility in the intelligent age

Surveys such as the “European Communications Monitor 2025/26,” the “FTI Communications Heatmap 2025,” and the study by CommTec AG and GK Personel Consulting on the “Communications Profession in Digital Change” show that our discipline is confronted with two key challenges: the value orientation of communication in a world of contested truths, and the effective use of AI as a transformative technology. Debates about communication ethics and technological change have accompanied modern communication management from the outset. The crucial difference now is that these two dimensions are inextricably intertwined, and communicators hold a key to societal stability.

The online portal Statista expects the AI market to exceed €320 billion in volume by 2026, potentially rising to more than €1.5 trillion by 2031. According to tech database provider Crunchbase, more than US$200 billion was invested in AI in 2025 alone. And while the Allianz Risk Barometer 2026 shows that AI has risen from twelfth to second place among the biggest business risks within just two years – including reputational risks such as those arising from exaggerated benefit claims, so-called AI washing – the Intelligent Age, in which AI permeates all areas of life, is nevertheless dawning at breathtaking speed.

And the pace of innovation remains extremely high: after generative AI learned to model the world on the basis of probabilities, AI agents are now intervening as independent actors, and so-called world models that predict reality are on the horizon. The pressure for rapid implementation is particularly pronounced in communication because it is perceived as an area with a comparatively high tolerance for error. If something is “just PR” and cost pressure is high at the same time, it is a quick step from a helpful interface – AI as a tool for communication management – to AI first – AI as the operating system of communication management altogether.

The question of ethical implications is at the same time the question of communicative responsibility in the Intelligent Age. Christian Bauer, head of the Institute for Design and Ethics at the Saar University of Fine Arts, has convincingly argued that generative AI cannot be programmed for ethical reflection, let alone moral action. At best, what is possible is “behavioral coordination between autonomous masters and digital servants,” resulting in moral “de-skilling,” because ChatGPT and similar technologies are not endowed with ethics but based on statistics. These language models are “people pleasers” and thus cannot provide the support and guidance in periods of uncertainty that is increasingly demanded of communication managers.

Nobel laureate economist Simon Johnson warns against AI as a “universal truth machine” and points to the pluralism of people — including those outside the mainstream, such as even “madmen or heretics” — as a superior principle of progress. The circumstances under which cognitive processes are performed are at least as important as the outcome. A recent study from the University of Hohenheim on the influence of AI on the meaning of work shows that tasks requiring more independent cognitive effort are perceived as particularly meaningful.

At the same time, researchers at the University of Pennsylvania have found evidence that AI users are less likely to draw comprehensive, independent conclusions during internet research – the very conclusions that tend to spark high interest among others. While AI reduces the effort involved, it simultaneously diminishes what a Würzburg education researcher calls “desirable challenges to the learning process.” This lack of friction and resonance hinders the development of the attitudes that are essential for communicative resilience in an environment of contradictory realities. At the Munich Security Conference, there was talk of society’s future need for “cognitive resilience.”

If communicators are to provide support and demonstrate a clear stance, they need to be grounded. Truths and theories are of little help in the current phase of fragmentation and polarization – regardless of their origin. In the late 1980s, the American sociologist Ray Oldenburg, in his theory of “third places,” described low-threshold meeting places such as cafés, bars, and bookstores that, in a sense, physically support people. If “third places” exist for communication management, they are not (only) located in the digital realm, but also in factories, laboratories, canteens, newsrooms, and studios.
