The ethical contextualization of trust

In the previous posts of this blog series [1][2], I tried to establish the connection between human/user behavior and the online trustworthiness of systems, products, people, places, information, and so on. I also pointed out the various ways in which system designers try to address trust-related issues, the constrained contexts in which this approach makes sense, and the vast set of contexts in which it brings contradictory and undesirable results. The main reason for this is that trust is mostly treated as a systemic variable whose value can be controlled and tweaked from the outside, instead of as a social and organic value that emerges from the interactions among the entities in the system. Through these blog posts, I therefore try to offer another view on what ethical design should hold as requirements, and what expectations it should set for the behavior of systems and entities. In this blog post, I take that discussion as a starting point and reason about the proposed approach.

From the analysis in the previous blog posts, it became clear that ethical system design is neither just about technical design nor about regulatory interventions in that design. It is also not just about satisfying the requirements for transparency and accountability, nor about working around the algorithmic biases of the employed technologies. Instead, ethical design is much more about the soft social traits and values cherished by the actors in the system, and about the entities’ awareness of how these traits manifest. Just like in the real world.

Let’s take an example.

We would rarely suspect a child of having ill intentions towards someone’s welfare. We even feel pleased and happy to instruct children when they make mistakes. Moreover, we approach these situations with a certain amount of humbleness and learn many things from our children as well. However, we would never rely on a child to set the structure for the mechanisms used to run society.

This line of thought already implicitly contains several types of trust: trust that children are not ill-intended; trust that children hold information that can provide a learning experience; trust that children are interested in learning from their mistakes and, more importantly, in learning from us; and distrust that they hold sufficient knowledge and ability to structure the socio-economic and political systems of our society.

In a way, when we speak about children, we speak about unconditional trust, which stems from the fact that in this context trust is often bi-directional: it is not only we who trust in the children’s will to learn, but also they who trust that we hold the knowledge needed to help them correct the mistakes they make. On the other hand, when we speak about distrust, we do not think of lost trust, but rather of an unreached level of trust, for which we are willing to invest our time, capability and resources.

The “distrust” in this scenario also contains several implicit (and intangible) requirements which, much like trust, fall under the domain of ethics. Some of these requirements are:

- hope that the desirable trust level will be reached [3], which is necessary to feed our good will to invest resources;
- forgiveness for the mistakes made in the process of learning;
- withholding judgment, in order to make a positive change and maintain our own trustworthiness;
- forgetfulness of the mistakes that we are not able to explain or understand;
- humbleness, to confess that we are not all-knowing and all-understanding, and to accept that the learning is bi-directional;
- tolerance of the specificities of others that may prevent them from reaching the desired trust level for a longer time, if ever;
- robustness, leaving space and possibility not only for a first, but also for a second, third and subsequent impression;
- readiness to change one’s mind, since a wrong “impression” may result from identical manifestations of different states: egoism and over-enthusiasm, unavailability and deliberative thinking, caution and inertness, etc.;
- belief that there is a vast set of other perspectives on the same situation, which often goes hand in hand with humbleness.

It is clear that these traits are often intertwined. It is precisely this intertwining that needs to be understood in order to determine tangible markers that can help qualify the interactions in a given system and ensure that certain trust thresholds in the exchange of information/knowledge are met.
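To make the step from intangible traits to tangible markers more concrete, below is a minimal sketch of how a few of the traits above could be operationalized over an interaction history. It is not a method proposed in this series: all names, weights and thresholds are illustrative assumptions, meant only to show that traits such as hope, forgiveness and robustness can, in principle, be expressed as measurable quantities.

```python
# A minimal, illustrative sketch (not the method of this blog series).
# All names, weights and thresholds are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Interaction:
    """One exchange between two entities in the system."""
    successful: bool   # did the exchange meet expectations?
    was_mistake: bool  # did the counterpart make a correctable mistake?
    corrected: bool    # was the mistake later acknowledged and corrected?

def trust_score(history: list[Interaction]) -> float:
    """Trust emerges from the interaction history, not from a preset value.

    'Forgiveness': corrected mistakes do not count against trust.
    'Robustness':  early failures weigh less than recent ones, leaving
                   room for a second and third impression.
    """
    if not history:
        return 0.5  # 'hope': start from openness rather than from zero
    score, weight_sum = 0.0, 0.0
    for i, interaction in enumerate(history, start=1):
        weight = i / len(history)  # recent interactions weigh more
        ok = interaction.successful or (interaction.was_mistake and interaction.corrected)
        score += weight * (1.0 if ok else 0.0)
        weight_sum += weight
    return score / weight_sum

TRUST_THRESHOLD = 0.7  # illustrative; in practice this is context-dependent

history = [
    Interaction(successful=False, was_mistake=True, corrected=True),
    Interaction(successful=True, was_mistake=False, corrected=False),
    Interaction(successful=True, was_mistake=False, corrected=False),
]
print(trust_score(history) >= TRUST_THRESHOLD)  # True: a forgiven mistake plus recent successes
```

Even this toy model makes the intertwining visible: changing how “forgiveness” is encoded immediately changes what “robustness” rewards, which is exactly why such markers cannot be designed in isolation.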

Some additional questions that arise at this point are:

Q1: What are the points at which child-like trustfulness appears and disappears?

Q2: What are the traits that enable and disable this?

Q3: What are the factors that contribute to it?

Q4: How do we go from intangible to tangible – from awareness to requirement, and employ this knowledge in a pragmatic manner?

Q5: Which part is transferable to a socio-technical context?

Q6: How do we make a successful collaborative match between the online platforms and algorithms on the one side, and the users/humans on the other side?

We will try to address some of these questions and relate them to the context of ethical AI and ethical system design in a future blog post. We will then propose how such knowledge can be employed and integrated in a pragmatic manner. Certainly, the entire approach should be based on an interdisciplinary and collaborative effort among many professionals: technicians, educators, legal experts, socio-economic experts, psychologists, neuroscientists, citizens, social activists, etc.

From the reasoning presented here, one thing is certain: if we are not able to see and treat online interactions (regardless of whether we are talking about humans or algorithms) as learning experiences, it will be very hard to speak of ethical design at all. In the absence of the social traits listed above, we can only speak of “expectations for guarantees”, “calculative negotiations”, or… well, you already get it.

[1] https://www.concordia-h2020.eu/blog-post/the-causal-loops-of-online-trust-user-behavior-and-misinformation/

[2] https://www.concordia-h2020.eu/blog-post/the-market-for-lemons-on-the-market-for-ideas/

[3] Note that this also contains elements of utility and pay-off, which are currently the only ones accounted for in the design of trust systems.

(By Tanja Pavleska, Laboratory for Open Systems and Networks, Jozef Stefan Institute)