
Facts or fakes? Social bots, fake news and social media

The dark side of the public expression of opinion – social bots, fake news and misinformation – is a hot topic in the media, politics and public discourse. The feeling that social networks are being used for manipulation and propaganda increasingly overshadows their positive aspects. In this interview, Prof. Heike Trautmann and Dr. Christian Grimme from the University of Münster talk about their latest research on the subject.

Dr. Grimme, if manipulation and disinformation are circulated on social media platforms, should we stop using social media?

Christian Grimme: No, social media is often very helpful for personal and professional purposes. We use it to stay in contact with friends and coworkers, to share information and to cooperate. In this way, social media connects people from all over the world and enables efficient collaboration. It allows everyone to take part in more or less open political discussion. It is also important from the perspective of professional customer service – just think of the interaction between customers and businesses. The sheer number of users and the deep integration of social media into our lives show how useful it is to people.

We often hear that there are social bots, fake news, misinformation and even propaganda on social media. It all seems to be taking a turn for the worse.

Christian Grimme: Yes, it might look like that. But think about the possibilities it opens up for people who don’t have the right to free speech. For them, social media is a valuable platform on which they can describe their humanitarian situation or voice their political opinion. Social media is not inherently bad just because everyone can participate – including people whose opinions we don’t share. What we should be aware of, and possibly need to combat, is the misuse of the technology from the back end, for example through social bots.

Prof. Trautmann, could you briefly summarize what a social bot is and what it does?

Heike Trautmann: The concept of a social bot has not been well defined so far. However, some time ago we attempted to provide a definition in an article: We consider a social bot an agent or program that can interact with social media users. It is essentially a robot that hides behind a social media account. Since social media profiles are, by definition, an abstraction of the human personality, it is easy to replace the human with a robotic counterpart without anyone noticing.

Social bots are often used to distort the perception of reality. A very simple scenario is increasing the popularity of a certain topic. When lots of accounts like, share or post about this topic, social networks consider it very important and increase the visibility of the topic for other users, even though machines initially promoted it.

But that’s not the only way in which social bots can manipulate societies. How can social bots change an election result, for example?

Christian Grimme: We’re not sure whether that’s even possible, although some researchers claim that there are clear connections here. However, we can imagine some strategic scenarios in which social bots play an important role. Think about an attack through misinformation that is indirectly, and possibly accidentally, circulated by the media. Journalists increasingly rely on social media to find stories. Although there are still ethical principles governing how journalists write stories and collect and verify sources, they increasingly pick up subjects that are prominent on social media. If we’re able to manipulate trends, content and reactions to a particular topic on social media, we could possibly bring this topic into public discourse through journalistic multiplication. In that case, we’re not only addressing social media users, but the whole of society – in particular, people who don’t use social media.

That’s an alarming scenario, but is it really that easy?

Christian Grimme: No, of course not. This is an imaginary scenario. That said, we have already seen the power of information circulated on social media on several occasions. Think about the German satirical magazine that published misinformation about a schism in the German Conservative Party. As this would probably have meant the end of the present government, some news agencies – including Reuters – reported it, evidently without further verification. This resulted in some political turmoil and reactions on the stock markets. I think this illustrates the potential. But manipulating society is no easy task. It requires a lot of preparation, some luck and, most of all, time. Given sufficient time, however, misinformation, fake news and social bots can be used to achieve a long-term change in opinion.

Does that also mean that manipulation, fake news and social bots can have economic effects and impacts on customer relationship management?

Heike Trautmann: Researchers at the German Association for Security in Industry and Commerce (ASW) have identified some very interesting attack vectors in a study. One is stock market manipulation, where false information or sentiment spread on social media is used to mislead automated trading algorithms. There are also very specific vectors aimed directly at businesses. Loss of reputation is an important issue: along with fake product reviews, you can also destroy a company’s reputation as an employer or as a reliable business partner. As you can see, this has serious consequences for customer relationship management in both the B2B and B2C sectors. However, the basic concepts behind these attacks are the same as in a social context: distorting content on social networks in the hope that it will be circulated and seen by the general public.

What can we do about social bots, fake news and misinformation? At the moment there is some research, including yours, about social bots. Why can’t we ban them from social media?

Christian Grimme: As a matter of fact, it’s not easy to identify social bots. Colleagues have proposed mechanisms for this that range from very simple indicators to approaches based on machine learning. But in our experience these indicators, however sophisticated they may be, aren’t enough on their own to reliably detect the automation behind social media accounts. You can imagine that counting the posts made by a single account can be helpful in extreme cases. But it doesn’t make much sense to set a fixed threshold that separates bots from people.
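To make the limitation of such a fixed posting-rate threshold concrete, here is a minimal, hypothetical sketch in Python. The `posts_per_day` feature and the cut-off of 50 posts per day are illustrative assumptions for this example, not part of any detector discussed in the interview.

```python
from datetime import datetime
from typing import List


def posts_per_day(timestamps: List[datetime]) -> float:
    """Average number of posts per day for a single account."""
    if len(timestamps) < 2:
        return float(len(timestamps))
    span_days = (max(timestamps) - min(timestamps)).total_seconds() / 86400
    return len(timestamps) / max(span_days, 1e-9)


def naive_bot_flag(timestamps: List[datetime], threshold: float = 50.0) -> bool:
    """Flag an account as suspicious if it exceeds a fixed posting rate.

    The threshold is an arbitrary illustration: a news outlet or a very
    active human can easily exceed it, while a carefully throttled bot can
    stay well below it - which is why a single fixed cut-off cannot
    reliably separate bots from people.
    """
    return posts_per_day(timestamps) > threshold
```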

Big problems also arise with machine learning approaches, such as the well-known Botometer, formerly known as BotOrNot. To train a detector like this, manually analyzed accounts of people and of social bots are needed so that patterns can be identified. Although this helps to find patterns in the available data set, it ignores how accounts develop over time. In one study, we noticed that bot accounts had changed so much after several months that the detectors were no longer able to correctly classify the very accounts they had originally been trained on.
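For illustration only, here is a hedged sketch of what such a supervised detector looks like in principle, using scikit-learn on made-up account features. The feature set, the synthetic data and the simulated drift are assumptions for this example; they do not reproduce Botometer or the study mentioned above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Hypothetical per-account features:
# [posts per day, followers/friends ratio, mean reply delay in seconds]
humans = rng.normal(loc=[5, 1.0, 3600], scale=[3, 0.5, 1800], size=(200, 3))
bots = rng.normal(loc=[80, 0.1, 30], scale=[20, 0.05, 15], size=(200, 3))

X = np.vstack([humans, bots])
y = np.array([0] * 200 + [1] * 200)  # 0 = human, 1 = bot; in practice these labels come from manual analysis

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Months later, the same bot accounts behave differently ("concept drift"):
# they post less often and wait longer before replying, so they start to look human.
drifted_bots = rng.normal(loc=[10, 0.8, 2400], scale=[4, 0.3, 1000], size=(200, 3))

print("accuracy on original bot accounts:", clf.score(bots, np.ones(200, dtype=int)))
print("accuracy on drifted bot accounts:", clf.score(drifted_bots, np.ones(200, dtype=int)))
```

The point of the sketch is not the particular classifier but the evaluation at the end: a model trained on a snapshot of account behavior degrades once that behavior shifts, which mirrors the observation from the study described above.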

If it’s so complicated to deal with social bots, what is your approach?

Christian Grimme: To be honest, we think that it is incorrect to focus on detecting social bots. How does it help to know whether an account is automated, semi-automated or controlled by a person? As long as posting robots aren’t part of a misinformation strategy, campaign or propaganda attack, they aren’t fundamentally harmful. Some are even helpful, posting weather data or other interesting things. And is a group of people who are paid to carry out a coordinated attack on public opinion better than automated propaganda? No.

That’s why we think that it makes more sense to watch out for misinformation and manipulation strategies, instead of individual agents. When we find a malicious strategy, we can identify both internet trolls and social bots from the top down. Therefore we don’t need a single indicator, but rather several that address the different aspects of a campaign and the agents involved. This set of indicators can also contain possibilities for detecting automation, but that’s not the focus.

And is that also a part of your research project?

Heike Trautmann: Yes, we are involved in many projects that approach these issues in an interdisciplinary way. One example of this is “PropStop”, a project in which computer scientists, statisticians, social scientists and journalists work on detecting automated propaganda. It’s specifically about identifying digital propaganda, and not just bots. Furthermore, the project “DemoResil”, which is led by a colleague from psychology and communication, focuses on mechanisms for detecting fake news and strengthening resilience to misinformation.

The upcoming project “Moderat” (Moderate) will tackle the issue of hate speech at a technical level to support moderators of online news groups. And our cooperation with the German security authorities shows that social media is increasingly being treated as critical infrastructure within cyber security.

In your view, what are the future challenges for research?

Christian Grimme: There’s a lot to be done; we’re only at the beginning of this research. We need to understand how propaganda operates in online media. This doesn’t just include technical aspects, but also the psychological impact that activities on social media have, as well as the implications for the economy, politics and society.

Think about it: three years ago, misinformation on social media was not yet an issue, at least not in Germany. Who knows what it will be like in three years’ time. That means we have to take future trends and technologies into account in order to anticipate new forms of manipulation. For example, what influence will developments in artificial intelligence have? We cooperate intensively with the European AI initiative CLAIRE in this area. Moreover, the question of social media as critical infrastructure will be a focus of our research. It’s therefore important to slightly shift our perspective on cyber security. We cooperate with public institutions and authorities in this area.

Finally, we must strengthen the network of scientists and practitioners from various sectors in order to face all of the new challenges. That’s why we are currently developing a sustainable institution as a focal point for cooperation and a home for all of our research: The ERCIS Competence Center for Social Media Analytics. We coordinate all activities here and, just as importantly, ensure that practitioners like Karsten Kraume are involved. He works at Bertelsmann and sits on several boards such as ERCIS and RISE SMA.

Many thanks for your time.

Prof. Dr. Heike Trautmann

Prof. Dr. Heike Trautmann is Professor for Information Systems and Statistics at the Institute for Information Systems at the University of Münster and Scientific Director of the ERCIS Omni-Channel-Labs powered by Arvato. She is also academic co-director of the ERCIS Competence Center for Social Media Analytics and director of the ERCIS network.

PD Dr.-Ing. Christian Grimme

PD Dr.-Ing. Christian Grimme is a private lecturer at the Chair of Information Systems and Statistics at the Westfälische Wilhelms-Universität Münster and scientific co-director of the ERCIS Competence Center for Social Media Analytics. He is also project coordinator of the PropStop project, head of a study for the German Federal Office for Information Security conducted prior to the Bundestag elections, and initiator of the Multidisciplinary International Symposium on Disinformation in Open Online Media (MISDOOM) 2019.

Author: Editorial team Future. Customer.
Image: © sdecoret – AdobeStock

