When is artificial intelligence beneficial?

Does artificial intelligence see the "bicycle" or the "green line" in this image? In fact, it is hard to say. Photo / illustration: Eivind Torgersen / UiO


Can we be sure that a self-taught artificial intelligence really “sees” what we want it to see?

Courtesy of Titan UiO, by Eivind Torgersen – Every 20 minutes, a small driverless bus runs along the harbor promenade in Oslo. With a maximum speed of 25 kilometers per hour, it carries passengers from Vippetangen to Kontraskjæret.
Driverless cars are perhaps the most obvious example to think of when it comes to artificial intelligence (AI) – both when we talk about its benefits and when we talk about what can go wrong without a person behind the wheel.
The cars are, of course, taught to drive in traffic, but the most revolutionary thing is that they keep learning while they drive and therefore become better drivers.

The driverless bus Mads runs along the Oslo waterfront, from Vippetangen to Kontraskjæret. Photo: Eivind Torgersen / UiO

Artificial intelligence in the hospital and the doctor’s office

Not all artificial intelligence is as visible as a car in traffic. Nor will it only replace people.

A self-learning artificial intelligence can also replace traditional computer algorithms – for example, in the MRI scanners that create images of your brain.

Last year, the United States Food and Drug Administration (FDA) approved the use of artificial intelligence to detect diabetic retinopathy, a disease of the retina that affects diabetics.

– This is revolutionary, says mathematician Anders Hansen.
– It means that this can now be used in a commercial setting.

Hansen is a professor at the University of Oslo but spends most of his time at the University of Cambridge, where he leads his own research group on artificial intelligence.

Diabetic patients can now come to a clinic and have a photograph taken of their eye. But it is not a doctor or an optician who analyzes the image – it is an artificial-intelligence algorithm that gives an answer based on what it has learned to look for.

Either the retina is healthy, or it is affected by diabetic retinopathy.

“It does it better than me, and I’m a very experienced retina specialist,” inventor Michael Abramoff told NPR.

We do not know what the artificial intelligence does

The artificial intelligence that evaluates images of diabetics’ eyes is accurate enough for the FDA to approve it. That is, it performs at least as well as a human assessment.

But artificial intelligence can also behave in ways that a human being never would.

“We really do not know what artificial intelligence does,” says Anders Hansen.

His research is about showing what such algorithms actually do.
– And it may seem that they pick up on completely different things than a human does.

Deep learning

This is because an artificial intelligence differs from a traditional computer algorithm.
The latter is simply a recipe that delivers an answer when it is given input – an answer that is completely predetermined by what the algorithm has been told to do.

Prof. Anders Hansen, Department of Mathematics

An artificial intelligence is also an algorithm, but much more advanced.

– What is different about an artificial-intelligence algorithm is that it learns from previous experience. Based on this learning, it makes a choice and does something different, says Hansen.

First it goes through an intensive training camp. The hope is that it learns there what you want it to learn.

– When the learning is done, you have a finished algorithm. The algorithm can keep learning, but the first learning is the most important. That is the big difference, says Hansen.
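The difference Hansen describes can be sketched in a few lines of Python. Everything below is invented for illustration: a traditional algorithm is a fixed, hand-written rule, while a learned rule depends entirely on the examples it was trained on.

```python
# Toy illustration (all names and numbers invented): a traditional
# algorithm versus one whose behavior is learned from examples.

def fixed_rule(x):
    # Traditional algorithm: the threshold 0.5 was chosen by a
    # programmer, so the answer is completely predetermined.
    return "positive" if x > 0.5 else "negative"

def train_threshold(examples):
    # "Learning": choose the threshold that classifies the labeled
    # training examples best. The resulting rule depends entirely
    # on the data it was shown.
    best_t, best_correct = 0.0, -1
    for t, _ in sorted(examples):
        correct = sum(("positive" if x > t else "negative") == label
                      for x, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

examples = [(0.1, "negative"), (0.4, "negative"),
            (0.6, "positive"), (0.9, "positive")]
t = train_threshold(examples)

def learned_rule(x):
    return "positive" if x > t else "negative"
```

Show the learned rule different examples and it would end up with a different threshold – that first round of training, as Hansen says, determines what the finished algorithm does.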

Since 2012, what is known as deep learning has been the dominant method for teaching artificial intelligence. And down in those depths, it is not so easy to know what is going on.

“We do not really know how the learning mechanism in an artificial intelligence develops, and that is what we are trying to discover,” says Hansen.
It is not necessarily so easy.

Room for problems

Hansen and his colleagues put artificial intelligences through relatively simple tests to see how good learners they really are. Sometimes the algorithms come up with rather strange answers, for example when asked to recognize different animals, or when shown black-and-white images with horizontal and vertical stripes.
A slightly more complicated variant is an artificial intelligence that has been trained – has learned – to determine whether a mole is harmless or potentially malignant, based on nothing but an image.

Then it is very important that it actually reads the image correctly. The same is absolutely necessary in another field of artificial-intelligence development: speech recognition. If you are going to get a sensible answer from Siri or Alexa on your phone, they must first understand what you are actually asking them to do.

The tests show that the mole-checking artificial intelligence is usually right. The strange thing is that it is just as confident when it is wrong. And exactly why and when it goes wrong seems quite arbitrary.

An artificial intelligence can now assess how dangerous a mole is. But sometimes it gets it wrong. Photo / illustration: Rufar / Colourbox

When the scientists tilted the image only slightly, the artificial intelligence could change its mind – from being very confident that the mole was harmless to being very confident of the opposite.

“We do not really understand what it has recognized, because the image is practically the same,” says Hansen.

– Our hypothesis is that there is a false structure in the image – something that is in the image that a human cannot perceive, but that the machine perceives and latches onto, he says.
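This confident-but-wrong behavior can be mimicked with a toy classifier. All the weights and "pixel" values below are invented, not taken from Hansen's experiments: the point is that a very steep decision function is highly confident on both sides of its boundary, so a change too small to notice flips it from "certainly harmless" to "certainly malignant".

```python
import math

# Toy illustration (weights and pixel values invented): a classifier
# with a steep decision function is very confident on both sides of
# its boundary, so a tiny input change flips its verdict completely.

def confidence_malignant(pixels, weights, steepness=200.0):
    # Weighted sum of the inputs, squashed through a steep sigmoid.
    score = sum(w * p for w, p in zip(weights, pixels))
    return 1.0 / (1.0 + math.exp(-steepness * score))

weights   = [1.0, -1.0, 0.5]
original  = [0.40, 0.43, 0.02]   # score just below the decision boundary
perturbed = [0.44, 0.43, 0.02]   # one "pixel" nudged by 0.04

p_before = confidence_malignant(original, weights)   # ~0.02: "sure it's harmless"
p_after  = confidence_malignant(perturbed, weights)  # ~0.98: "sure it's malignant"
```

Nothing in the confidence score reveals how close the input was to the boundary – which is why the mole classifier can be "very sure" on both sides of a nearly invisible change.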

How do we know what it sees?

Such a false structure cannot be detected by a human eye.

– To the machine, the image is nothing but a series of numbers, explains Hansen.

That makes it hard for us, who read images in a completely different way, to discover possible errors.

But to show what the problem with false structures is, we can use an example that we can actually see.

Suppose you are learning a foreign language and are shown a collection of images, each accompanied by that language’s word for “bicycle.” All the images show bicycles of different kinds, but each one also contains a green line.

Using these images, you are supposed to learn the word for “bicycle” in a foreign language. But what if you come to believe that the word means “green line”? Photo: Eivind Torgersen / UiO

After this training process, you – or the artificial intelligence – are shown similar images and answer “bicycle” every time.

But can you be sure that it is the bicycle that is being recognized? What if the artificial intelligence has linked the word to “horizontal green line”? Or simply “green line.” Or just “line.” Or something else entirely.

The green line is an example of a false structure. There may be similar things in the bicycle images that are invisible to us, but that an artificial intelligence would pick up.

It is something like this that Hansen and his colleagues believe is hidden in the images of the moles – something that makes the artificial intelligence change its assessment.

This is just one of many examples. Hansen points out that artificial intelligence seems to make simple mistakes that a human would never make.

“It’s pretty obvious that artificial intelligence ‘thinks’ in a completely different way from the human brain,” says Hansen.

The artificial-intelligence community is divided

Because artificial intelligence is so different, its non-human behavior can also have other unfortunate consequences. For example, it can be deceived.

This can be exploited by scammers, for example, in health care or in insurance matters.

Such uncertainties have, according to Hansen, led to a deep division in the artificial-intelligence research community. Some have argued that machine learning today is comparable to medieval alchemy – which others have taken as a deadly insult.

Anders Hansen is among those who ask for caution.

– We have to think about this before introducing it on a large scale, he says. – If we can detect errors in very simple images, what will happen with more complex ones?

That does not mean research on artificial intelligence will stop or slow down. He believes that is not possible.

For example, the FDA decision on diabetic retinopathy shows that artificial intelligence is in full development in the health services.

– The FDA is serious. When it approves the use of an artificial intelligence, that approval is based on solid scientific research, in the same way as for a medicine that goes on the market, says Hansen.

The mole test mentioned above could also be considered beneficial to society as long as it is at least as accurate as a human being – even if it sometimes fails, and even if we do not fully understand why it fails.

– Then one has found a simple recognition algorithm that works well enough for practical purposes, even if it does not work anything like the way a person does.

It may happen tomorrow – or in 500 years

The road ahead is determined not only by mathematicians and programmers; it is also about politics, privacy, and business interests.

– The US military is spending huge sums on this now.
“Artificial intelligence is definitely here to stay,” says Hansen.

But he does not dare to predict how fast things will move, or in which direction.

– We do not know if there will be a breakthrough tomorrow or if we have to wait 500 years for the next step.

Contact person: Anders Hansen, Professor at the Department of Mathematics
