The market for lemons on the market for ideas

In a previous blog post, I explained the link between trust, misinformation, and user behavior. Two major points follow from that text:

The first is that trust is an invisible enabler of the socio-technical interactions within a system, whereas misinformation (or information disorder/pollution in general) is a tangible disabler of desirable system qualities. This means that trust should be seen neither as a quality that can be embedded during system design nor as a parameter subject to maintenance by the system designer or architect. Rather, it is an emergent property that depends on the interactions among all entities within the system, as well as on their connections with the system itself and with the outside environment. Moreover, the manifestations of trusting interactions are often unpredictable, mainly because humans are at the core of these systems.

The second major point is that all of the above is just the tip of the iceberg, because information (in general, not just misinformation) is only one of the assets to be accounted for when approaching trust (and security) online. The management of information depends on a wider set of factors linked not only to technical and system design aspects, but also to the socio-economic landscape, the political context, the underlying business models, the media policies, the regulatory practices, etc. All of these factors are interdependent, and their interdependencies must be taken into account in any effort aiming to facilitate trusting decisions and to enable trust to thrive online.

Both of these points remain somewhat unaddressed. However, while the second point is well understood and shared by the research and wider stakeholder community, the first poses huge challenges. In a way, it even runs counter to a currently popular paradigm in digitalization processes and the technical world, where trust is mostly treated as a system property that is both designable and controllable. Thus, trust (and reputation) models are among the most popular algorithmic solutions embedded in e-commerce platforms, recommendation systems, search optimization engines, etc.

It is important to state here that I am not arguing against the usefulness of these models: they do help deal with information overload, and they do assist decision-making in a context of information asymmetry. I am merely stating that attaching the term trust to them should be done with caution, as it may be misleading and give the (wrong) impression that designers of these systems can rely on the known social mechanisms from the traditional world to explain trust phenomena in the digital world.

In one of my earlier studies, I detected the manifestation of concrete cognitive biases (specifically: the framing effect, positivity bias, distinction bias, and semantic discrepancy) that users exhibit when making trusting decisions online, and analyzed their impact (Pavleska and Blažič 2017). The negative effect of user bias on online ratings and recommendations is a well-known problem in the trust research community. Solutions for alleviating these effects have mainly been sought in the algorithmic processing of information (in the form of explicit user feedback or implicit behavioral patterns). In practice, this means that current solutions treat the symptom of the problem rather than its cause, which is a direct result of considering trust a system parameter rather than an emergent property.

Little effort is invested in finding solutions that would address the cause of the problem. Yet such solutions may be much “softer”, less expensive, and more impactful in many respects. For example, while the quality of user feedback is essential for improving the quality of information, properly educating users about all the roles they may take on in their online interactions precedes any algorithmic account of their behavior and is needed prior to any technical solution. In the case of trust systems, providing relevant user feedback is an incentive in and of itself: it feeds the trust score with the quality information the algorithms need in the first place, while yielding a meaningful and truthful representation of the quality of the items under evaluation. Understanding this provides an effective feedback loop that can make trust flourish online as a system quality rather than a parameter in and of itself. At the same time, it addresses the first point outlined at the beginning of this blog post.
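
To make this feedback loop concrete, here is a minimal sketch in Python (the names, the weights, and the update rule are my illustrative assumptions, not an implementation from the cited studies): the trust score of an item is a credibility-weighted aggregate of user feedback, and each rater’s credibility is in turn updated by how well their feedback tracks the emerging consensus.

```python
# Minimal sketch of the feedback loop: quality feedback feeds the trust score,
# and the trust score feeds back into how much each rater's feedback counts.
# All names, weights, and the update rule are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class Rater:
    credibility: float = 0.5  # how much weight the system gives this rater's feedback


@dataclass
class Item:
    ratings: list[tuple[Rater, float]] = field(default_factory=list)  # (rater, score in [0, 1])

    def trust_score(self) -> float:
        """Credibility-weighted average: low-credibility feedback is dampened."""
        total = sum(r.credibility for r, _ in self.ratings)
        if total == 0:
            return 0.5  # neutral prior when there is no usable feedback
        return sum(r.credibility * s for r, s in self.ratings) / total


def update_credibility(rater: Rater, given: float, consensus: float, lr: float = 0.1) -> None:
    """Close the loop: raters whose feedback tracks the emerging consensus gain
    credibility; raters who deviate strongly from it lose credibility."""
    error = abs(given - consensus)
    rater.credibility = min(1.0, max(0.0, rater.credibility + lr * (0.5 - error)))


alice, bob = Rater(credibility=0.8), Rater(credibility=0.3)
item = Item(ratings=[(alice, 0.9), (bob, 0.2)])
score = item.trust_score()           # ~0.71: Alice's feedback dominates
update_credibility(bob, 0.2, score)  # Bob's large deviation nudges his credibility down
```

The exact update rule matters far less than the shape of the loop itself: truthful feedback improves the score, and the score in turn rewards the truthful raters.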

In a recent study on information disorder governance in Europe, I analyzed 142 governance initiatives whose activities are directed towards combating misinformation and disinformation and protecting fundamental human rights. When asked about the major challenges they face, the initiatives placed low public awareness of the users’ role in the online environment at the top of the list. Clearly, education (both formal and informal) should be among the primary venues for addressing these problems. Furthermore, to facilitate this process, policy experts and regulators may approach technology companies with recommendations and requirements for enriching their codes of conduct with user education about their roles on the platforms. Moreover, companies may embed clear clauses in their ‘Terms of use’ explaining the importance of truthful feedback for the users’ own sake. This already shows how a multidisciplinary account of the issue can be given, which was the second point discussed at the beginning.

In the social research community, such problems are addressed in the seminal work of economist George Akerlof, “The Market for Lemons: Quality Uncertainty and the Market Mechanism”, where the mechanisms for dealing with information asymmetry, and the effects of failing to do so, are well explained. His work is set in an economic context and shows how the quality of goods traded in a market can degrade in the presence of information asymmetry between buyers and sellers, leaving only “lemons” behind (in American slang, a lemon is a car that is found to be defective after it has been bought). As Akerlof puts it: “There are many markets in which buyers use some market statistic to judge the quality of prospective purchases. In this case there is incentive for sellers to market poor quality merchandise, since the returns for good quality accrue mainly to the entire group whose statistic is affected rather than to the individual seller. As a result, there tends to be a reduction in the average quality of goods and also in the size of the market.”

This last statement is especially important not only for understanding why countering the effects of lemon markets matters, but also for recognizing a context in which this is happening, one that may not necessarily involve tangible goods.
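
To see the unraveling concretely, here is a toy simulation under a standard textbook parameterization of Akerlof’s model (the specific numbers are my illustrative assumptions, not taken from his paper): each seller knows the quality q of their own car and will sell only at a price of at least q, while buyers value a car of quality q at 1.5q but can observe only the average quality of the cars currently on offer.

```python
# Toy illustration of Akerlof's unraveling under illustrative textbook assumptions:
# sellers know their car's quality q and sell only if the price covers it; buyers
# value quality q at 1.5*q but see only the average quality of what is on offer.
import random

random.seed(0)
qualities = [random.uniform(0.0, 1.0) for _ in range(10_000)]  # private quality per car

price = 1.0  # start with buyers willing to pay for the best conceivable car
for step in range(12):
    on_offer = [q for q in qualities if q <= price]  # owners of better cars withdraw
    if not on_offer:
        print(f"step {step}: market collapses, no cars offered")
        break
    avg_quality = sum(on_offer) / len(on_offer)
    price = 1.5 * avg_quality  # buyers pay the expected value of an offered car
    print(f"step {step}: {len(on_offer):5d} cars offered, "
          f"avg quality {avg_quality:.3f}, next price {price:.3f}")
```

Each round, the best cars are withdrawn, the price buyers are willing to pay falls, and both the average quality and the size of the market shrink towards zero: exactly the “reduction in the average quality of goods and also in the size of the market” that Akerlof describes. Replace cars with pieces of information and prices with attention or social capital, and the same dynamic applies.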

To study some of the effects of information asymmetry in currently established trust systems, as part of my work on user biases I also investigated the reasoning behind users’ decisions on whether to buy a certain product, based on the incentives they were given for the feedback they provided (Pavleska and Blažič 2017). One important result of that study was that introducing monetary elements into the incentives of a trust system may cause information to be treated as a limited or tradable resource in the process of acquiring social capital. This, in turn, squeezes high-quality products and trustworthy subjects out of the system, leading to what is here referred to as “the market for lemons on the market for information”.

But even aside from controlled experiments, let’s consider our everyday experiences on social networks. We often see information shared by people (especially ‘influencers’) on Twitter, Facebook, or Instagram about the results of a survey that supposedly proves some popular belief or doubt. Regardless of the scientific depth and rigor of the approach taken in carrying out that survey, the media publish its results as if they were even more important than a scientific study. The reason is that the results of such surveys come much quicker and are much more compelling to the wider audience than those of scientific studies, which are subject to peer review, based on scientific methodology, take much longer to carry out, and are far less compelling to the wider audience. This negatively affects research processes, in addition to misleading the general public with false or unverified information. It is thus critical to understand that both the behavior and the evolution of a system depend on the signals favored and amplified through user communication.

So what does it mean to assure the quality of information? It means:

  • designing systems properly for their context of implementation;
  • establishing adequate interaction protocols among the system components;
  • enabling the system to adapt to new contexts (which also requires well-defined interaction protocols with the outside environment);
  • designing proper feedback mechanisms that keep system behavior within the desired boundaries;
  • defining redundancy mechanisms in case of undesirable behavior (a toy sketch of these last two items follows below); etc.

For that, the ability to rely on the methods and technology used to establish all of the above is of critical importance. An in-depth analysis and recommendations for the design of systems supporting trusting decisions can be found in my paper “A Holistic Approach for Designing Human-Centric Trust Systems” (Ažderska and Jerman-Blažič 2012).
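
As a toy illustration of the last two items in the list above, here is a hedged sketch (the thresholds, the plain average, and the trimmed mean are illustrative assumptions, not recommendations from the cited paper): a feedback mechanism monitors an aggregated quality signal, and a redundancy mechanism takes over when the signal drifts outside the desired boundaries.

```python
# Hedged sketch of a feedback mechanism with desired boundaries and a redundancy
# fallback. All thresholds and aggregation choices are illustrative assumptions.
LOWER, UPPER = 0.4, 0.9  # desired boundaries for the aggregated quality signal


def primary_score(feedback: list[float]) -> float:
    """Primary mechanism: plain average of user feedback scores in [0, 1]."""
    return sum(feedback) / len(feedback)


def redundant_score(feedback: list[float]) -> float:
    """Redundancy mechanism: a trimmed mean, more robust to manipulated extremes."""
    k = len(feedback) // 10  # drop the most extreme 10% on each side
    trimmed = sorted(feedback)[k:len(feedback) - k]
    return sum(trimmed) / len(trimmed)


def quality_signal(feedback: list[float]) -> float:
    """Feedback loop: fall back to the redundancy mechanism when the primary
    signal leaves the desired boundaries. Assumes non-empty feedback."""
    score = primary_score(feedback)
    if not LOWER <= score <= UPPER:
        score = redundant_score(feedback)
    return score
```

Real trust systems use far more elaborate aggregation, but the control-loop shape is the same: monitor, compare against the desired boundaries, and switch to a redundant mechanism when the behavior becomes undesirable.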

Clearly, the Internet can be regarded as a market of ideas. Sometimes the idea is visible and tangible, to the extent that it is even monetized (e.g., YouTube videos can be monetized based on their views and popularity). More often, though, the idea is not immediately observable and is bound to some digital characteristic we carry with us while moving across the digital realm (e.g., the number of followers we have on social networks, the Likes and Follows we perform, etc.). Some would argue that we have the same thing in the traditional world, only represented and valued in a different manner. While this may hold, there is a huge difference between the (seemingly) same values cherished online and offline. For instance, the types of impact, the manifestation of the consequences, the scalability of actions and effects, and the possibility of taking corrective measures all contribute to the profound differences between the two worlds. Therefore, straightforwardly mapping traditional social concepts onto the online world amounts to reductionism, just as treating trust as a mechanistic system parameter does.

To conclude, ethical system design is not just about transparency and accountability, but also about much softer characteristics that policy does not yet support. Am I going too far in asking that forgiveness, forgetting, belief, and regret be considered digital properties that the digital realm is able to cherish? I don’t think so (at least I find some accomplices in https://core.ac.uk/display/38598456 and https://files.givewell.org/files/labs/AI/Josang2013.pdf). What do you think? 😀

References

  • Ažderska, Tanja. 2012. “Co-Evolving Trust Mechanisms for Catering User Behavior.” In Trust Management VI, edited by Theo Dimitrakos, Rajat Moona, Dhiren Patel, and D. McKnight, 374:1–16. IFIP Advances in Information and Communication Technology. Springer Boston.
    http://www.springerlink.com/content/0744828h7136t146/abstract/.
  • Ažderska, Tanja, and Borka Jerman-Blažič. 2012. “A Holistic Approach for Designing Human-Centric Trust Systems.” Systemic Practice and Action Research, 1–34.
    https://doi.org/10.1007/s11213-012-9259-3.
  • Pavleska, Tanja, and Borka Jerman Blažič. 2017. “User Bias in Online Trust Systems: Aligning the System Designers’ Intentions with the Users’ Expectations.” Behaviour & Information Technology 36 (4): 404–21.
    https://doi.org/10.1080/0144929X.2016.1239761.
  • Pavleska, Tanja, Andrej Školkay, Bissera Zankova, Nelson Ribeiro, and Anja Bechmann. 2018. “Performance Analysis of Fact-Checking Organizations and Initiatives in Europe: A Critical Overview of Online Platforms Fighting Fake News.”
    https://doi.org/10.1017/9781839700422.014.
  • Smokvina, Tanja Kerševan, and Tanja Pavleska. 2019. “Igra Mačke z Mišjo: Medijska Regulacija v Evropski Uniji v Času Algoritmizacije Komuniciranja” [A Cat-and-Mouse Game: Media Regulation in the European Union in the Era of the Algorithmization of Communication]. Javnost – The Public 26 (sup1): S82–99.
    https://doi.org/10.1080/13183222.2019.1696604.

(By Tanja Pavleska, Laboratory for Open Systems and Networks, Jozef Stefan Institute, atanja@e5.ijs.si)