The causal loops of online trust, user behavior and misinformation

From establishing simple connections online, through enabling e-commerce and risky transactions, to conducting online voting, trust has come to serve online communities for the very same purpose it serves the traditional world: facilitator of our online presence and interactions, of community building and, ultimately, enabler of democracy. Misinformation, in contrast, is often described as one of the greatest enemies of democracy. The ways in which it affects democratic processes and values in a society may vary, but all of them share one common denominator: undermining trust – in people, expertise, politicians, institutions, systems and society in general. So how exactly is misinformation related to the loss of trust and the rise of distrust online?

As a researcher at the Laboratory for Open Systems and Networks at Jozef Stefan Institute, I’ve been studying trust from both socio-economic and computational perspectives. This research, spanning a wide set of contexts, is directly related to cybersecurity and has thus become an integral part of the CONCORDIA knowledge base.

When judging the veracity of information (be it on a topic we are familiar with or one we have no insight into), we usually base our assessment of its trustworthiness on the reputation of the information source and the credibility of the “Thinker” (when the source is different from the Thinker; e.g., a medium can be the source, whereas an expert/a doctor/a politician is the Thinker).

The problem becomes more complicated as information passes through intermediaries on its way from the Thinker to the information consumer (the reader). For instance, a Facebook or Twitter profile may share an article published on another medium (a news company website, an Internet portal, a personal blog, etc.), which could have (directly or indirectly) taken that information from the Thinker. In that case, in order to decide whether the information is trustworthy, the consumer should question not only the trustworthiness of the FB profile that shared it, but also that of the medium that indirectly shared it, that of the medium that directly transmitted it, as well as the credibility of the Thinker itself. This is, of course, something no one does in reality, nor is it something that should be done. Otherwise, we would end up in total paranoia and distrust, bringing ourselves and much of society to a halt. That is why trust is often referred to as “the glue that keeps societies from falling apart”, and this is how the notion and its role translate online as well.
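To make this cascade of judgments concrete, here is a minimal, purely illustrative sketch (not a model from our research) of how trust could be discounted computationally at every hop between the Thinker and the reader. All names and scores below are assumptions chosen for illustration; the only point is that each additional intermediary erodes the trust that can reasonably be placed in the information.

```python
# Toy sketch: trust discounted multiplicatively along the sharing chain.
# Every hop score is an invented number in [0, 1], for illustration only.

def chain_trust(hop_scores):
    """Combine per-hop trust scores multiplicatively into an overall value."""
    trust = 1.0
    for score in hop_scores:
        trust *= score
    return trust

# Thinker -> medium that published it -> medium that re-shared it -> FB profile
chain = [
    ("thinker_credibility", 0.9),
    ("publishing_medium", 0.8),
    ("resharing_medium", 0.7),
    ("fb_profile_sharing_it", 0.6),
]

overall = chain_trust(score for _, score in chain)
print(f"overall trust in the shared article: {overall:.2f}")  # ~0.30
```

Real trust models are, of course, far richer than a single product of scores, but even this toy version shows why checking every link in the chain quickly becomes impractical for an ordinary reader.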

What we actually do in the above scenario is one of a few things. If the information is in line with our beliefs and expectations, we immediately accept it as the truth. Our brain is so used to doing this (in a process known as “minimizing cognitive dissonance”) that we do it almost unconsciously and therefore often don’t question our own judgment. Otherwise, we either mark the information as false (potentially marking its source as non-trustworthy as well) or we continue the “investigation” and research the topic by collecting conflicting evidence and comparing the arguments in order to choose the one most likely to be true. For the majority of information consumers, this is too complex a process, mainly because they have no interest (utility, pay-off, etc.) in performing those steps. What most consumers do instead (as a shortcut) is check what like-minded people decided about that information (whether to trust it or not) and accept their truth. The probability that the like-minded, in turn, “borrowed” that truth from their own like-minded contacts is also very high. This is what is commonly known as the phenomenon of “echo chambers”, which is closely related to the “bandwagon effect” resulting from cognitive biases in users’ behavior. The common issue all of these share is the dependence of opinions on a common information source. In social networks, for example, this common information source is very often the so-called influencer.
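As a purely hypothetical illustration of this shortcut (not a result from our research), consider a reader who, instead of investigating a claim, simply adopts the majority verdict of their like-minded contacts. If most of those contacts echo the same influencer, the “independent” opinions collapse into one.

```python
# Hypothetical echo-chamber sketch: the reader's "decision" is just the
# majority verdict of contacts who themselves mostly echo one influencer.
import random

random.seed(42)

influencer_says_true = True  # the common source labels the claim as true

# Ten like-minded contacts: most echo the influencer, a few judge on their own.
contact_verdicts = [
    influencer_says_true if random.random() < 0.8 else random.choice([True, False])
    for _ in range(10)
]

def shortcut_decision(verdicts):
    """Adopt whatever the majority of one's contacts decided."""
    return sum(verdicts) > len(verdicts) / 2

print(shortcut_decision(contact_verdicts))  # almost always mirrors the influencer
```

The numbers (ten contacts, an 80% echo rate) are arbitrary; the point is that the apparent consensus the reader relies on is largely a single opinion repeated.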

Therefore, many social media platforms are trying to approach the problem from several angles: fact-checking the offered information, checking the trustworthiness of the information source and offering alternative opinions and sources on the subject. This, in turn, requires trust in the social media platforms themselves: that they are addressing the problem in the proper way, that they rely on the proper datasets, use adequate technologies, preserve user privacy, etc. In addition, trust in the employed technology (represented by artificial intelligence and the underlying algorithms) is required. Finally, proper use of this technology on the user’s side is also crucial to properly close the circle of trust. Clearly, solutions to counter distrust and misinformation add yet another level to the need for trustworthiness and yet another cascade into the computational support of our trusting decisions, ultimately creating the very same problem they were designed to address… much like a cat-and-mouse game.
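Purely as an illustration of combining those angles (and not a description of any platform’s actual system), one could imagine a moderation signal that weighs a fact-check score, the source’s reputation and the presence of alternative sources; all field names, weights and thresholds below are assumptions.

```python
# Hypothetical sketch of combining the three angles into one moderation signal.
from dataclasses import dataclass

@dataclass
class ContentSignals:
    fact_check_score: float    # 0 = debunked, 1 = independently verified
    source_reputation: float   # 0 = unknown or poor, 1 = well established
    offers_alternatives: bool  # are alternative sources shown to the reader?

def needs_warning_label(s: ContentSignals) -> bool:
    """Flag content whose weighted trustworthiness evidence is low."""
    score = 0.6 * s.fact_check_score + 0.4 * s.source_reputation
    if s.offers_alternatives:
        score += 0.1  # exposing other viewpoints slightly lowers the risk
    return score < 0.5

print(needs_warning_label(ContentSignals(0.2, 0.4, False)))  # True -> label it
```

Note that every element of this sketch (the data, the weights, the threshold) would itself have to be trusted, which is exactly the recursion described above.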

The reason for this is that trust is often seen as a mechanistic property of a system – as a tangible result and a property embeddable at design time. This would be the case if we were dealing exclusively with machines that have pre-determined ranges of operation and predictable behaviors, but it is questionable whether we would still be speaking about trust then. With people at the core of our systems, it is essential to recognize that trust is an emergent property, one that results from properly designed systems and adequate interaction rules. As such, many other “human” properties need to be enabled to allow trust to thrive, such as forgiveness, forgetting, a sense of social reputation, etc.

It is important to note that the above is just the tip of the iceberg, as information is only one of the assets critical to account for when forming trusting decisions. It is often itself a product of a wider set of factors linked to the socio-economic landscape, the political context, the underlying business models, the media policies, the regulatory practices, etc. All of these are interdependent, and these dependencies should be taken into account in the effort to facilitate trusting decisions and to enable trust to thrive online.

To summarize: trust is an enabler, but an invisible one; misinformation is a disabler, but a visible one. We often switch these notions in our perception and treat trust as a tangible property that could simply be embedded in a system fighting an invisible enemy. This needs to change. In this blog post we have only identified the problem, but that is also an important part of solving it. 😀

In one of the next blog posts we will touch upon the design principles for such systems and present the results from piloting one such methodology (in the context of regulatory frameworks and policies for misinformation) in 24 EU countries. Moreover, we will address the exacerbating effect of crises (such as the current COVID-19 pandemic) on trusting decisions and the impact of context on the overall system supporting those decisions.

So if these are some of the topics that tickle your curiosity, make sure to follow us on our channels:

(By Tanja Pavleska, Laboratory for Open Systems and Networks, Jozef Stefan Institute)