
What the father of artificial intelligence fears

When creating AI, an engineer worries about whether his creation might one day turn against him.

As a person involved in artificial intelligence research, I often encounter the view that many people fear AI and what it might become. Given the history of humanity, and the stories the entertainment industry feeds us, it is not really surprising that people fear a cybernetic uprising that forces us to live in isolated enclaves, or that turns human beings into batteries, "Matrix"-style.

Still, when I look at the evolutionary computer models I use in AI development, it is hard for me to imagine that my harmless, innocent-as-a-tear creations on my computer screen could one day turn into the monsters of a futuristic dystopia. Could I really become a "destroyer of worlds", as Oppenheimer once ruefully described himself after leading the program that created the nuclear bomb?


Perhaps I would accept that honor, or perhaps the critics of my work are right after all? Maybe it really is time for me to stop avoiding the question of what I, as an expert in the field, actually fear about artificial intelligence.

The fear of unpredictability


The computer HAL 9000, dreamed up by science-fiction writer Arthur C. Clarke and brought to life by director Stanley Kubrick in his film "2001: A Space Odyssey", is a great example of a system that failed because of unforeseen circumstances.

In many complex systems, such as the Titanic, NASA's Space Shuttle, and the Chernobyl nuclear power plant, engineers had to combine many components. The architects of these systems may have known very well how each element worked individually, but they did not understand well enough how all of those components would work together.

The result was systems that were never fully understood even by their creators, with well-known consequences. In each case a ship sank, two shuttles exploded, or almost all of Europe and parts of Asia faced radioactive contamination, because a set of relatively small problems, happening by chance at the same time, combined into a catastrophic effect.

I can easily imagine how we, the creators of AI, could arrive at a similar result. We take the latest developments and research in cognitive science, translate them into computer algorithms, and add it all to existing systems. We are trying to develop AI without fully understanding our own intelligence and consciousness.

Systems such as IBM's Watson or Google's AlphaGo are artificial neural networks with impressive computing capabilities, able to cope with genuinely difficult tasks. For now, though, the worst consequence of an error in their work is a loss in the quiz show "Jeopardy!" or a missed chance to beat yet another of the world's best players at a board game.

These effects are not global in nature. In fact, the worst that can happen to people in such a case is that someone loses some money on a bet.

However, AI architectures are getting more complex and computer processing is getting faster. The capabilities of AI will only grow over time, and that will lead us to entrust AI with more and more responsibility, despite the growing risk of unforeseen circumstances.

We know very well that "to err is human", so it will simply be impossible for us to create a system that is truly safe in every respect.

Fear of misuse

I try to deal with the unpredictable consequences of the AI I am developing by using an approach called neuroevolution. I create virtual environments and populate them with digital creatures, giving their "brains" tasks of increasing difficulty to solve.

Over time these creatures evolve: their efficiency at solving problems increases. Those that cope with a task best are selected for reproduction, and a new generation is created from them. Over many generations, these digital creatures develop cognitive abilities.

Right now, for example, we are taking the first steps toward machines that can perform simple navigation tasks, make simple decisions, or memorize a couple of bits of information. But soon we will evolve machines that can perform more complex tasks and have a much more effective overall level of intelligence. Our ultimate goal is to create intelligence at the human level.

Throughout this evolution we try to detect and fix all errors and problems. With each new generation, the machines cope with errors better than the previous one did. This increases the chances that we will be able to identify unintended consequences in simulation and eliminate them before they can occur in the real world.
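The select-and-reproduce loop described above can be sketched, in deliberately toy form, as a simple genetic algorithm. Everything here is an illustrative assumption, not the author's actual system: the "creatures" are bit strings rather than evolved neural-network brains, the task (maximize the number of 1-bits) stands in for real problems of increasing difficulty, and all parameter values are made up.

```python
import random

random.seed(0)       # fixed seed so the toy run is reproducible

GENOME_LEN = 16      # size of each creature's "brain" (here: a bit string)
POP_SIZE = 50        # number of creatures per generation
GENERATIONS = 40     # how many rounds of selection and reproduction to run
MUTATION_RATE = 0.02 # per-bit chance of flipping during reproduction

def fitness(genome):
    # Toy stand-in for "how well the creature solves its task":
    # the score is simply the number of 1-bits in the genome.
    return sum(genome)

def mutate(genome):
    # Copy a parent, occasionally flipping individual bits.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def evolve():
    # Start from a random population.
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Those that cope with the task best are selected for reproduction.
        population.sort(key=fitness, reverse=True)
        parents = population[:POP_SIZE // 2]
        # The next generation consists of mutated copies of the parents.
        population = [mutate(random.choice(parents)) for _ in range(POP_SIZE)]
    return max(fitness(g) for g in population)

best = evolve()
print(best)  # best fitness found; converges toward GENOME_LEN
```

Even this minimal version shows the mechanism the text relies on: no individual creature is designed by hand, yet over generations the population as a whole gets measurably better at its task.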

Another possibility offered by the evolutionary approach is endowing artificial intelligence with ethics. It is likely that such ethical and moral characteristics of humans as reliability and altruism are a result of our evolution, and a factor in its continuation.

We can create artificial environments and endow machines with abilities that let them demonstrate kindness, honesty, and empathy. This could be one way of making sure we raise obedient servants rather than ruthless killer robots. Yet even though neuroevolution can reduce the level of unintended consequences in AI behavior, it cannot prevent the misuse of artificial intelligence.

As a scientist, I must abide by my obligation to the truth and report what I discover in my experiments, whether I like the results or not. My job is not to decide what I like and what I do not; what matters is only that I can publish my work.

Fear of wrong social priorities

Being a scientist does not mean losing one's humanity. At some level I have to stay in touch with my own hopes and fears. As a morally and politically motivated person, I must consider the potential implications of my work and its possible effect on society.

As scientists, and as members of society, we still have not arrived at a clear idea of what exactly we want from AI, or what it should ultimately become. This is partly, of course, because we still do not fully understand its potential. But we need to clearly understand, and decide, what we want to get from truly advanced artificial intelligence.

One of the areas people pay most attention to in conversations about AI is employment. Robots already do hard physical work for us, such as assembling and welding car-body parts. But one day robots will be given cognitive tasks as well, the ones previously considered the exclusive, uniquely human domain. Self-driving cars will replace taxi drivers; self-piloting aircraft will no longer need pilots.

Instead of receiving medical care in emergency rooms filled with perpetually tired staff and doctors, patients will be able to be examined and diagnosed by expert systems with immediate access to all medical knowledge. Surgery will be performed by fatigue-proof robots with a perfectly "steady hand".

Legal advice will come from a comprehensive legal database. Investment advice will come from market-forecasting expert systems. Perhaps one day all human work will be done by machines. Even my own work could be done faster by a large number of machines tirelessly researching how to make machines more intelligent.

In our current society, automation is already pushing people out of their jobs, making the rich owners of such automated machines richer and everyone else poorer. But this is not a scientific problem. It is a political and socio-economic problem that society itself must solve.

My research will not change that, but my political principles, together with the rest of humanity, might help create circumstances in which AI becomes extremely useful, instead of widening the gap between the one-percent global elite and the rest of us even further.

The fear of a catastrophic scenario

And so we come to the last fear, instilled in us by the insane HAL 9000, the Terminator, and every other evil superintelligence. If AI keeps evolving until it exceeds human intelligence, will a superintelligent system (or a set of such systems) come to regard humanity as useless material? How will we justify our existence in the face of a superintelligence able to do and create things no human can? Will we be able to avoid being wiped off the face of the Earth by the machines we helped create?

So the most important question in such circumstances would be: why would an artificial superintelligence need us at all?

If such a situation arose, I would probably argue that I am a good person, one who even contributed to the creation of the superintelligence now standing before me. I would appeal to its compassion and empathy, asking it to leave me, a compassionate and empathetic being, alive. I would also add that diversity has value in itself, and that the universe is so vast that the existence of the human species in it hardly costs anything at all.

But I cannot speak for all of humankind, and for all of us it is hard for me to find a compelling argument. When I look at us honestly, I see that we have done, and keep doing, a great deal wrong. People in this world hate one another. We wage war on each other. We distribute food, knowledge, and medical care unjustly. We pollute the planet. There are, of course, many good things in this world, but looking at all the bad we have created and continue to create, it would be very hard to find an argument in support of our continued existence.

Fortunately, we do not have to justify our existence just yet. We still have time: somewhere between 50 and 250 years, depending on how quickly artificial intelligence develops. As a species, we are capable of coming together and finding a good answer to the question of why a superintelligence should not wipe us off the face of the planet.

Resolving this question will be very hard. After all, saying that we value diversity and actually living that way are two completely different things, just as saying that we want to save the planet and actually managing to do so are.

All of us, as individuals and as a society, must prepare for this catastrophic scenario, using the time we have to show and prove why our creations should allow us to continue to exist. Or we can simply keep blindly believing that such a turn of events is impossible, and just stop talking about it.

However, whatever physical danger a superintelligence may pose to us, we must not forget that the danger will be political and economic as well. If we do not find a way to raise our standard of living, capitalism will eventually simply replace human workers with artificial-intelligence laborers serving only a handful of elites who own all the means of production.

© 2017 – 2019, paradox. All rights reserved.
