
We can imagine machines designed to open up problems rather than solve them


On the occasion of his recent visit to Paris, we spoke with the historian David Bates, for whom the way we conceive of our machines structurally prevents us from thinking about the future.

Mathilde Simon – 12 October 2021 #AI #History

It was in a chilly amphitheater at the École Polytechnique, in Paris, on the occasion of a conference on the limits of artificial intelligence, that we met David Bates, historian of thought and professor at the University of California, Berkeley. In his forthcoming book, An Artificial History of Natural Intelligence, he examines the relationship humans have maintained with technology throughout history, the better to think about the place given to the digital in our society. For this close associate of the philosopher Bernard Stiegler, the way we conceive of our machines structurally prevents us from thinking about the future.

Usbek & Rica : Vos travaux portent sur la possibilité de créer de nouvelles normes grâce aux machines.At first, can you come back in a few words to how the standards go on human life?

David Bates

There are three normative domains through which we can look at the human. The first is the biological domain, which makes us animals with limited cerebral and physical abilities. What differentiates us from other animals is that we are aware that we inherit some of our habits and that we transmit them: we live within cultural and historical systems. And then there is the technological domain. It often seems less important than the other two, but humans are also defined by their ability to use tools, and above all to think and live through the technical systems they create. So there is no single set of norms that defines the human: in fact, the human is made up of three spheres, each of which has its own norms. We often hear that we should put down our phones to be more sociable. But there is no possible return to an original human without machines: technology is part of the human, without completely defining it.

To be able to think about norms, you say that we must first think about crisis. To what extent does the future emerge from these moments of tension?

David Bates

This is the case in the biological and socio-cultural domains: evolution is based on change, but above all it requires rupture. There are no new living species without rupture. And if we look at the dynamics of history, we see, from the Paleolithic onward, normative systems that work very well until they begin to change slowly. And if things fail to adapt to that pace, a radical change occurs. For example, at the time of the French Revolution, all the norms associated with the monarchy were established by social, political and economic structures. When it collapsed from within, because the system could not adapt to what was then considered modern, the norms dissolved, and the state with them, in the space of a single night!

David Bates

In France, you invoke the concept of the "state of emergency" in moments of crisis. The English word "emergency" comes from "emergence": times of threat bring things out, which can lead to a new way of living. And what allows our cultural, political and biological spheres to survive over time is the ability to make decisions, to completely reorganize in order to survive. But it has to go through a decision: there is always a moment, during a crisis, when you have to decide whether to save something that is being undermined or whether it is better to abandon it. But when do we decide that norms are no longer able to maintain security and normality? And above all, how do we create new norms when we have lost the old ones?

The old debate over whether or not artificial intelligences can be creative is regularly revived. Do you think that one day AI will be able to break with the norms we have set for it in its computer programs, and will be able to create new norms itself?

David Bates

« We can imagine machines designed to open up problems rather than solve them »

Today, computers can understand and adapt in very sophisticated ways. But there still comes a moment when all the tools a machine possesses are unable to handle a new situation. In the AI world, we start from the principle that machine learning will always be able to adapt to the new data it is given, that it will simply adjust its ability to predict the future. That amounts to saying that we do not believe in the concept of crisis! Yet crises are essential in every domain, so it would be worth looking into the question...

If we transpose this idea of crisis as an opportunity for change to the technological domain, it seems to mean that the more we try to make our machines infallible, the less likely they are to become truly intelligent...

David Bates

Exactly. The more "infallible" the systems seem, the further we move away from their ability to take advantage of error and to use it internally as a path to something radically new. But is it really useful to think of digital tools as having a survival instinct? The problem is that the model whereby the brain is a computer and the computer should imitate the brain is so powerful that we always believe the purpose of an AI is to be intelligent like a human. As a result, we get stuck on the idea that "if computers become too intelligent, we must find a way to control the machine". But it makes no sense to try to build a machine that imitates the human, since there is no human in itself: the human is defined by their relationship to the three independent domains I described a moment ago. How could a machine imitate humans when humans themselves need machines to be what they are?

So rather than benefiting the machines themselves, could the occurrence of a crisis in machines be useful to humans?

David Bates

If we are constantly trying to improve our machines, they impose on humans norms over which we have no control, in the same way that we do not really control the very fine-grained way in which cultural, political and social norms shape our behavior.

Many researchers think it would be enough to set up an ethics institute for artificial intelligence to constrain the GAFAM and make sure our children do not get an iPhone too early. But what they fail to take into account is that our brains are being transformed by technologies, as has always been the case: since the beginnings of writing, we have transmitted our ideas via books, which are artificial memories. But now it is no longer just a matter of reading a book: our minds are embedded in digital infrastructures from which they cannot escape. So when we wonder what decision to make, our minds are in a way "disabled", precisely because digital technologies are built on a normativity that is supposedly "what is best for us". Our brains are incorporated into a system that allows no failure, no rupture, and therefore no opening toward another system.

We would need digital technologies that encourage interruption, novelty, singularity, and that push us to take back our decision-making power. Our infrastructure is so dominated by the capitalist logic of profit and homogeneity that these notions seem to go against the very essence of the digital, but it seems possible to me to imagine machines designed to open up problems rather than resolve them once and for all. In the meantime, we have automation, less transparency, and a desire to control the future, which amounts to destroying the future by making it merely an extension of what we already are.

Can machines integrate the concept of the future into their systems?

David Bates

Most AI models are based on prediction, but prediction is not the anticipation or imagination with which humans make use of the future. For us, the future must be open, capable of giving rise to unpredictable things. We must not think of the future as something good or evil, but as something to imagine collectively. Possible futures must be the driver of our decisions, and we need to create tools to help with that. Yet what I criticize in today's digital technologies is that they often tend to reintroduce what is "normal" in times of crisis: we can order on Amazon, have food delivered... until we all end up living the same way. These digital technologies prevent us from remaining open to other futures.
