By akademiotoelektronik, 25/10/2022

Why so much fuss about trust and reliability in AI?

Swiss Digital Trust Label, Trust Valley, Center for Digital Trust: the theme of trust and digital technology is in vogue, including in Switzerland. These initiatives suggest that trust is a catalyst for the success of artificial intelligence deployments. And yet, none other than Professor Joanna Bryson believes that "no one should trust artificial intelligence". What to make of all this fuss about trust in AI, or in the digital world? So, will blockchain, the crypto industry and labels solve all our trust problems? (Spoiler alert: no, no, and in a way.)

First question: can we trust AI? Short answer: yes. Long answer: it's complicated.

The question of whether we are really able to place our trust in machines is the subject of a lively debate between those who have faith ("yes, it is possible!") and those who doubt ("we should not trust AI!"). We place ourselves on the side of the believers. We start from the principle that trust between humans can be transposed and that, although different, it is in many ways similar to the trust those same humans say they have in machines.

In human relationships, trust can be understood as an adaptation strategy in the face of risk and uncertainty. This strategy takes the form of an evaluation: the skills of the person we are about to trust will be examined, for example. At the same time, the one who trusts is in a position of vulnerability. As in any relationship, we run the risk of being hurt. In other words, no trust without risk.

This vulnerability is just as critical when it comes to trust between human and machine. To trust a technology is to expect a certain result, a certain behavior, when we use it. A system's unreliability, its poor performance or its opaque processes, among other factors, can erode the trust placed in it. Trust and reliability are thus distinct concepts, unfortunately often confused.

Three things are therefore important to understand: trust is an attitude towards a third party, human or machine (1), which is supposed to help achieve a specific objective (2), in a situation of uncertainty (3). I can trust Amazon to deliver my package on time, but not to respect my privacy.

To return to the question asked, we can therefore answer that we are indeed capable of trusting AI in a concrete context. Is that a reason to do so? That is another question...

Second question: should we trust AI? Short answer: no. Long answer: it's complicated.


From a practical and normative point of view, the question of whether we should trust AI is much more interesting, because it shifts the discussion to the theme of reliability. While trust is a human attitude, and a complex latent variable in psychometric terms, reliability is a much more technical question linked to the properties of the technology. When Joanna Bryson claims that no one should trust AI, her message could not be clearer: do not use AI systems (or plenty of other systems) blindly.

As an example of blind trust gone wrong, the case of a highly educated Tesla driver is often cited: he lost his life in an accident because he was playing a game and not watching the road at all, trusting the system entirely. We will probably never know whether the fatal accident was the consequence of overconfidence, of the manufacturer's deceptive marketing promises, of the driver's lack of judgment, or of a combination of these three factors. In any case, educating people to adopt zero trust in machines is most likely the safest way to avoid harm.

Not trusting, and thereby depriving yourself of a system that is likely to deliver better results, is no panacea either. The ideal would be to promote "calibrated trust", in which the user adapts his level of trust (whether and how much he will rely on the system) according to the performance of the system in question. According to, or in spite of, stated performance, because we know that many companies exaggerate or hide the real capabilities of their products (advertising claims deserve to be put to the test).

Thus, calibrating our trust can save lives, but in situations of uncertainty and high risk in the human-machine relationship, it is better to adopt zero trust (better safe than sorry).
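The idea of calibrated trust described above can be sketched in a few lines of code. Everything here (function names, the success-rate heuristic, the 0.9 threshold) is an illustrative assumption of ours, not a method from the article: trust tracks the system's demonstrated reliability, and falls back to zero when the stakes are high and uncertainty remains.

```python
# Illustrative sketch of "calibrated trust" (names and thresholds are
# assumptions, not from the article): trust follows observed performance,
# with a zero-trust fallback under high risk or when no evidence exists.

def calibrated_trust(successes: int, trials: int, high_risk: bool) -> float:
    """Return a trust level in [0, 1] based on observed performance."""
    if trials == 0 or high_risk:
        # No evidence, or high stakes: adopt zero trust (better safe than sorry).
        return 0.0
    # Otherwise, trust is calibrated to the demonstrated success rate.
    return successes / trials

def should_rely(trust: float, threshold: float = 0.9) -> bool:
    """Rely on the system only when calibrated trust clears a threshold."""
    return trust >= threshold
```

For example, a system observed to succeed 95 times out of 100 in a low-risk setting earns a trust level of 0.95, while the same track record in a high-risk setting still yields zero trust.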

Third question: should we stop talking about trust? Short answer: yes. Long answer: it's complicated.

In our opinion, when we say that we should not trust AI, the most important message is this: think before acting. But thinking is exhausting. Wouldn't it be great to be able to blindly trust a company to respect my privacy and deliver my products on time? Sorry, but blockchain is no help here, and don't even try to sell us a crypto solution. A label can be a good start for everything that is not regulated by law. But don't we make things even more complicated by adding another actor to an equation of trust that we do not yet fully understand? Should we, in the future, investigate trust in labels as an indicator of trust in machines?

In the end, trust as an attitude is an interesting subject for psychologists. But when we talk about machines and their characteristics, we have to use the right terms and focus on reliability, because that is what we can best control.

Complementary question: what about law and trust?

Labels prove useful for guaranteeing reliability, but aren't laws a better option? Shouldn't we devote all our efforts to laws and regulations? Are they our only real indicator of reliability? First, yes: we must devote a lot of effort to laws and regulations in order to guarantee the accountability of designers. Second, no: the equation "law = trust" is a false answer. The purpose of laws should not be to increase trust, but rather to promote accountability and the proper functioning of society. The fundamental aim of a law remains to establish standards, maintain order, resolve disputes, and protect freedoms and rights, not to strengthen trust in people or in AI.

Conclusion: don't get lost in the details

Ethical laws and labels do not settle the question of trust. In fact, the formula "the more we trust, the more we use the technology" may not even hold. People rely on the least trustworthy products for the most irrational reasons: rational homo economicus is no more. Social down to his deepest cells, today's human favors convenience and sociability. We love humans, we like to build connections and, having no other behavioral knowledge to draw on, we even humanize machines.

This anthropomorphism is not so bad, provided that agents are not designed to deceive people. Admittedly, the formula "trustworthy AI" is anthropomorphic language, but it has the merit of communicating a message instantly understood by almost everyone who has some notion of this vague feeling of trust. If we were talking instead about explainable or responsible AI, only a very small fraction of people would understand.

Thus, although the terms "trust" and "reliability" are the subject of legitimate criticism in the context of AI, they can also be welcomed. They allow everyone to grasp the main reasons for building and using these complex technological objects, and their impact on society. Perhaps we would all be better off taking things more calmly and considering "trustworthy AI" as a vision rather than as a claim of technical precision.
