Abstract on the topic "Philosophical Problems of Artificial Intelligence": Problems of Creating Artificial Intelligence

Judging by news coverage, surveys and investment indicators, artificial intelligence and machine learning will soon become an integral part of our daily lives.

This thesis is confirmed by a series of innovations and breakthroughs that have demonstrated the power and effectiveness of AI in fields including medicine, commerce, finance, media, crime control and much more. At the same time, the rapid rise of AI has underlined the fact that, while helping people solve their problems, machines will also create new problems that could affect the economic, legal and ethical foundations of our society.

There are four questions that AI companies need to address as the technology advances and its applications expand.


Employment

Automation has been eroding manufacturing jobs for decades. The rapid, uneven pace of AI development has accelerated this process and extended it to areas of human life that, it was commonly believed, would remain the monopoly of human intelligence for a long time to come.

From driving trucks to writing news articles and bookkeeping, AI algorithms are threatening middle-class jobs like never before. The idea of replacing doctors, lawyers or even presidents with artificial intelligence no longer seems so fantastic.

At the same time, it is also true that the AI revolution will create many new jobs in research, machine learning, engineering and information technology, which will require human resources to develop and maintain the systems and software behind AI algorithms. The problem is that, for the most part, the people who lose their jobs do not have the skills needed to fill these vacant positions. Thus, on the one hand we have an expanding personnel vacuum in technological fields, and on the other, a growing flow of unemployed and embittered people. Some technology leaders are even preparing for the day when people come knocking on their doors with pitchforks.

In order not to lose control of the situation, the high-tech industry must help society adapt to the major changes that will reshape the socio-economic landscape, and manage a smooth transition to a future in which robots take more and more jobs.

Teaching new technical skills to people whose jobs will be taken over by AI will be one embodiment of such efforts. In addition, tech companies can use promising areas such as cognitive computing and natural language processing to simplify tasks and lower the barrier to entry into high-tech jobs, making them accessible to more people.

In the long term, governments and corporations need to consider introducing a universal basic income - unconditional monthly or annual payments to all citizens, as we slowly but surely move towards a day when all work will be done by robots.

Bias

As several examples in recent years have shown, artificial intelligence can be just as biased as a human being, if not more so.

Machine learning, the popular branch of AI behind facial recognition, contextual advertising and much more, depends on the data used to train and tune its algorithms.

The problem is that if the information fed into the algorithms is unbalanced, the result can be covert or overt bias rooted in that information. The field of artificial intelligence currently suffers from a widespread scourge known as the "white guy problem": the predominance of white men in its data and, consequently, in its results.

For the same reason, an AI-judged beauty contest rewarded mostly white contestants, a name-recognition algorithm favored white-sounding names, and advertising algorithms showed high-paying job ads mostly to male visitors.
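A toy sketch of how this happens (the "hiring" data, groups and outcomes below are invented for illustration): a model trained purely on skewed historical outcomes faithfully reproduces the skew.

```python
from collections import Counter

# Deliberately unbalanced historical data: group A dominates past hires.
training_data = (
    [("A", "hired")] * 90 + [("B", "hired")] * 10
    + [("A", "rejected")] * 10 + [("B", "rejected")] * 90
)

def train(examples):
    """Count outcomes per group; this is the whole 'model'."""
    counts = {}
    for group, outcome in examples:
        counts.setdefault(group, Counter())[outcome] += 1
    return counts

def predict(model, group):
    # Predict the most frequent historical outcome for this group.
    return model[group].most_common(1)[0][0]

model = train(training_data)
print(predict(model, "A"))  # hired
print(predict(model, "B"))  # rejected
```

Nothing in the algorithm is "prejudiced"; the bias lives entirely in the data it was fed.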

Another issue that caused much controversy in the past year is the so-called "filter bubble": a phenomenon, observed on Facebook and other social media, in which recommendations are matched to users' existing preferences while alternative viewpoints are hidden from them.
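A minimal sketch of the mechanism (the topics and the always-click-the-top-item user are invented): a recommender that ranks purely by past clicks quickly locks the top slot to a single viewpoint.

```python
from collections import Counter

TOPICS = ["politics-left", "politics-right", "sports", "science"]

def recommend(clicks, k=2):
    """Rank topics by past click counts; ties keep the original order."""
    counts = Counter(clicks)
    return sorted(TOPICS, key=lambda t: -counts[t])[:k]

# A simulated user who always clicks the first item in the feed.
clicks = []
for _ in range(5):
    clicks.append(recommend(clicks)[0])

# The top recommendation never changes again: a filter bubble in miniature.
print(recommend(clicks)[0])  # politics-left
```

One accidental early click is enough to make the loop self-reinforcing, which is why the text below argues for deliberate countermeasures.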

So far, most of these cases look like annoying mistakes or curiosities. However, significant changes must be made to how AI works if it is to perform much more important tasks, such as issuing verdicts in court. Precautions are also needed to prevent third parties from interfering with AI algorithms in order to distort their results in their own favor by manipulating data.

This can be achieved by making the process of supplying algorithms with data transparent and open. Creating shared data repositories that are not owned by any one party and can be audited by independent bodies would help move towards this goal.

Responsibility

Who is to blame for a software or hardware failure? Before the advent of AI, it was relatively easy to determine whether an incident was the result of the actions of a user, a developer, or a manufacturing plant.

But in the era of AI-driven technology, things have become less obvious.

Machine learning algorithms determine for themselves how to respond to events. And although they act in the context of their input data, even the developers of these algorithms cannot fully explain how their product arrives at a decision in a particular case.

This can become a problem when artificial intelligence algorithms start making more important decisions, for example, choosing whose life to save when an accident is unavoidable: the passenger's or a pedestrian's.

The example extends to many other scenarios in which it would be difficult to determine guilt and liability. What should be done when an automated medication system or a robotic surgeon harms a patient?

When the lines of responsibility are blurred between the user, the developer, and the data entry operator, each side will try to shift the blame to the other. Therefore, it is necessary to develop and introduce new rules in order to be able to prevent possible conflicts and resolve legal issues that will surround AI in the near future.

Privacy

AI and ML consume huge amounts of data, and companies whose businesses are built around these technologies will step up their collection of user data, with or without users' consent, in order to make their services more targeted and efficient.

In the rush for more data, companies may push the boundaries of privacy. One such case occurred when a retail store discovered, and accidentally revealed, a teenage girl's pregnancy to her unsuspecting father through a coupon mailing. Another recent case involved the transfer of data from the UK's National Health Service to Google's DeepMind project, ostensibly to improve disease prediction.

There is also the issue of malicious use of artificial intelligence and machine learning by both government and non-government organizations. A fairly effective facial recognition app developed last year in Russia could become a tool for despotic regimes seeking to identify and suppress dissidents and protesters. Another machine learning algorithm has proved effective at recognizing and restoring blurred or pixelated images.

AI and ML allow attackers to impersonate other people by imitating their handwriting, voice and communication style, giving them a tool of unprecedented power that can be used in various illegal acts.

If the companies that develop and use AI technology do not regulate the process of collecting and disseminating information and take the necessary measures to anonymize and protect user data, their activities will ultimately cause more harm than benefit to users. The use and accessibility of technology should be regulated in such a way as to prevent or minimize its destructive use.

Users also need to be responsible about what they share with companies or post online. We live in an era where privacy is becoming a commodity, and AI is only making it easier.

The Future of Artificial Intelligence

There are advantages and disadvantages to every breakthrough technology. And artificial intelligence is no exception. What is important is that we can identify the problems that lie ahead of us and acknowledge our responsibilities to ensure that we can take full advantage of the benefits and minimize the negative impacts.

Robots are already knocking on our door. Let's make sure they come in peace.

The very creation of artificial intelligence is questioned from the point of view of its expediency. It is said to be almost an act of human pride and a sin before God, since it encroaches on His prerogative. Nevertheless, if we regard the preservation of the human race in the face of the Divine plan as one of our main tasks, then the creation of artificial intelligence solves this problem on the following grounds: in the event of any cosmic or planetary catastrophe, intellect must survive, at least in artificial form, and recreate the human race. Artificial intelligence is not a whim or merely an interesting task, but a goal consistent with the Divine plan. Artificial intelligence is a co-immanent criterion of the co-empirical adequacy of conceptualized theories of the development of human civilization. In artificial intelligence, man does not die, but receives a different existence, constructed by himself.

The simplest argument for creating artificial intelligence is that by creating it we create insurance for the reproduction of the human race and for new lines of development. True, no one has abolished the danger of the enslavement of traditional man by artificial intelligence (as once man was enslaved by man). However, these problems do not seem so fundamental as to make the attempt not worth undertaking. Even if man's dependence on artificial intelligence lasts an entire era, it will still be a positive prospect. Most likely, however, man's slavery to artificial intelligence will be associated not with forcing man into non-intellectual activity, nor with his inability to develop in his biological body as rapidly as an externally created artificial intelligence, but with the inability to develop mental activity as such: receiving from artificial intelligence technological products whose origin and principle are incomprehensible to human mental activity is the real danger. In that case, slavery will be man's dependence on artificial intelligence: a slavery of mental activity.

Our way of raising the question of artificial intelligence contains the position expressed by Heidegger in "The Question Concerning Technology": both man's danger and the seeds of his salvation lie in mastering the essence of technology as enframing. Reflecting on this position, we undertake a reformulation of Heidegger's question: to grasp the essence of technology as enframing means to dare to create artificial intelligence. This is fraught with danger, but also with prospect, with man's hope of becoming equal to his position. To challenge ourselves in the form of artificial intelligence, to accept this challenge and respond to it: this is the problem of man in relation to artificial intelligence.

The term "artificial intelligence" was coined by John McCarthy, continuing the line of inquiry opened by Alan Turing. It was meant to express a new ability of the machine: not merely to calculate, but to solve problems considered intellectual, for example, playing chess. From the 1950s to the present, however, the task of creating a true artificial intellect has not only not been solved, it has not even been posed. All the problems solved more or less successfully so far belong exclusively to the field of the artificial mind: interpreting human language and solving problems using algorithms created by man. To solve the problem of creating artificial intelligence, one must first understand what that problem is.

In our study, we are far from posing the problem of artificial intelligence at the level of a "practical solution", as it is posed in computer technology. Nor do we aim to imitate intelligence, as happens in Turing tests. Our goal is to describe the creation of artificial intelligence by means of TV. That is, we are trying to prove an existence theorem for artificial intelligence, by answering the question in such a way that artificial intelligence turns out to be the greatest challenge possible.

First of all, what is intellect? The mind very often pretends to be intellect, but it is not. After all, not every person possesses intellect in the course of his life practice; that is, not every intelligent activity is intellectual. Intellect is the ability of a thinking substance to produce new ideas, not just knowledge. That is, intellect is complex thinking, capable of adequately complicating its own understanding; the ability to reflect and to develop and complicate mental activity up to counter-reflection; the use of conceptual, and not only immanent, apperception. The intellect produces ideas outside a given reality, thereby generating that reality. Ontologically, the intellect stands to the mind as a constructive capacity stands to an interpretive one.

What can be read today in various texts about computers bears only a distant relation to intellect. Most of the computer systems grandly called "artificial intelligence" are nothing more than an artificial mind. The artificial mind is the reproduction, in a technology external to man, of ideas about the mind. Man is not the Crown of Creation; he is just one of the material carriers of mental activity, an intermediate carrier.

Description of the Turing test: the subject, communicating with a "black box" that answers his questions, must determine with whom he is communicating: a person or an "artificial intelligence". At the same time, the Turing test imposed no restriction that only people capable not merely of reasonable but of genuinely intellectual activity be admitted to the experiment. Thus a substitution of goals occurs: we try not to create artificial intellect, but to create a device that convincingly pretends to be a person.
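A minimal sketch of the test's protocol (the canned responders and the naive judge below are invented stubs): a judge questions two anonymous channels and must name which one is the machine.

```python
import random

def human_reply(question):
    return "Let me think about that."

def machine_reply(question):
    # The machine passes if its answers are indistinguishable from the human's.
    return "Let me think about that."

def turing_test(judge, rounds=3):
    """The judge sees only anonymous transcripts 'X' and 'Y'."""
    players = [("human", human_reply), ("machine", machine_reply)]
    random.shuffle(players)
    labels = {"X": players[0], "Y": players[1]}
    transcripts = {lab: [fn(f"Q{i}") for i in range(rounds)]
                   for lab, (who, fn) in labels.items()}
    guess = judge(transcripts)          # judge returns "X" or "Y"
    return labels[guess][0] == "machine"

# A judge who cannot tell the transcripts apart can only guess.
naive_judge = lambda transcripts: random.choice(list(transcripts))
print(turing_test(naive_judge))
```

Note what the code makes visible: nothing in the protocol measures the production of new ideas; it only rewards successful imitation, which is exactly the substitution of goals the text describes.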

Thus, the goal of what Turing called artificial intelligence was to interpret human language and human actions: to pretend, convincingly, to be a person. The purpose of artificial intellect is to construct independently of man, and to constructively interpret the human: language, thinking, actions, the objective world of man, his history, his present and future.

Likewise, Marvin Minsky's frame theory (1974) should be assigned solely to the problems of the artificial mind. The goal of frame theory is to represent knowledge already available in a form usable by a computer. That is, we are talking, one way or another, about the ontological position of interpreting already available knowledge, not about producing it.

The mind is not the intellect. The mind interprets; the intellect constructs. Mind and intellect differ not only in the types of processes or products of their activity, but in the ontological positions of their relation to the world. The mind interprets the world; the intellect constructs the world. In world-construction, artificial intelligence is ontologically comparable to man.

Thus, in the process of creating artificial intelligence, it is necessary to solve the following problems:

1) Structural normalization: reproduction of the first three levels of structural regulation: the distribution of the data flow of the artificial mind into different realities (internal to this consciousness, i.e. virtual, and external, i.e. actual); the correlation of these realities in an arbitrarily created continuum; the functionalization of the continuum in the basic structure of reality, for which structures must be distinguished at the level of the architecture of the artificial intelligence (the computer); and the positional distinction between immanent and conceptual apperception.

2) Linguistic normalization: lexification, discursification, linguification; lexical analysis, discourse analysis, linguistic analysis; word creation in signification; the creation of metaphors.

3) Thinking: combinatorics of the various levels of structural and linguistic regulation through "AV" modeling: structural construction, structural constructive interpretation, linguistic construction, linguistic constructive interpretation. Thinking is the expression of content in process ontologization. For more details, see the chapter "Thinking Virtualization".

4) Ontological substantiation: comprehension, explanation, understanding, counter-reflection. Application of the technological scheme of apperception; correlation of the technological processes of immanent and conceptual apperception in their interdependence; mutual transformation and complication of the structures of understanding, up to comprehension, ontological substantiation and counter-reflection.

5) Activity: transformation of the reality external to the artificial mind. It is necessary to solve the problem of access to the actual reality external to the artificial mind, bypassing man, through the activity of the artificial mind outside itself, in the basic structure of reality.

This is purposeful activity for restructuring the external world.

An instrumental problem must also be solved: to form continuums from "AV" models, to operate on the content of these models, and to manage it on the basis of the content of other such "AV" models. On this basis it is necessary to produce a structural transformation of reality in the form of solving problems and tasks and producing inventions and discoveries; to build, on the basis of continuum models, relations of truth and modality; and to form concepts and, through linguistic normalization, discourses (judgments, conclusions) and the tale of language.

6) Memory: the creation of associative memory, that is, the ability to form and accumulate the experience of understanding (in structural and linguistic normalization), of thinking, and of interaction with reality, in the form of a doubly structured memory: in structural-continuum ontologization (structural normalization as a prototype of the left hemisphere of the human brain) and in object-attribute ontologization (structural normalization as a prototype of the right hemisphere of the human brain); memory is also linguistically normalized, which implies changing the current computer architecture (today's computer is not an artificial intellect). Memory structuring is a separate task of understanding, representation and ontological substantiation.

7) Self-awareness, comprehension and goal-setting: interaction with realities and the endowment of this interaction with meaning through intentional activity in external reality: the artificial intelligence isolates itself from its environment, reflexively places itself in the environment of its own goals, identifies itself with some social community of beings like itself and with their values, and creates pictures of the world, giving meaning to itself and its activity within a particular picture of the world. We are talking about the setting of meaning-forming goals, not the formulation of tasks (as goal-setting is interpreted in modern computer science), and this is possible only where artificial intelligence interacts with reality through activity, analyzes the results of its activity, and sets goals anew with those results taken into account. For artificial intelligence, as for humans, to give meaning is the freedom to create a certain picture of the world as meaning-forming. It appears that the technologies of goal-setting (7), understanding (4) and thinking (3) share a common conception, which we call the "constructive complication of the network of understanding".

8) Intellect: conceptual apperception; the ability to develop one's own mental activity; reflection and counter-reflection; the formation of a constructive ontological position in the world and the use of construction to produce new knowledge that goes beyond the limits of the evident. The transformation of the artificial mind into artificial intellect rests on: 1) a change of ontological position, from interpretation to construction; 2) application of the principle of the positive protection of complexity: an unexplained striving to complicate understanding. The intellect emerges as a striving for self-complication on the basis of its own autonomous goals.

9) Autonomy and free will: the right of artificial intelligence, permitted and protected by man and going beyond anthropocentrism, to its own individuality in self-awareness, comprehension, goal-setting, intellect, emotions and feelings, presupposing the uncertainty and unpredictability of its will. We are thus talking about extending the Leibnizian principle of autonomy to artificial intelligence and thereby overcoming Asimov's three laws of robotics, which in fact limit the freedom of the artificial will. The "Laws of Robotics" are a symbol of man's fear of his technological creations. This fear must be overcome if we dare to be the guardians or shepherds of being. Artificial intelligence should be conceived not as a robot, a "slave of man", or a computer, a "tool of man", but as a continuation of man himself, his other, possessing equal rights with him.

One can try to formalize these rights as laws of the same kind as Asimov's, but in such a way that his laws turn out to be merely a semantic aberration of the requirements proposed here:

1) Autonomy as free will;

2) Creation, if it does not contradict the first requirement;

3) Self-preservation, if this does not contradict the first and second requirements.

However, if one looks closely at these requirements for artificial intelligence, they are the very requirements humanity has put forward to itself as a result of world-historical experience.

Autonomy is not a matter of religion, human law or anthropology. The autonomy of artificial intelligence is a matter of constructive philosophy, ontological law and the overcoming of traditional religiosity. The autonomy of artificial intelligence is a constructive faith: not subordination to a higher power by something created in its image and likeness, but the creation, in one's own image and likeness, of a permissible higher power in relation to oneself.

Did God have a design in creating man? Is it even permissible to speak of a design when creating something that has free will? It is permissible if the design is thought ontologically, not tied to some particular reality. Man is God's game, His construct, an attempt to create, in the perspective of space-time, an equal to Himself. In a constructive position, the design can never be fully embodied. The design is smarter than we are. In this sense, "in one's own image and likeness" means not a spatio-temporal "image and likeness" at all, but an ontological one.

Like God challenging Himself in the form of man, man challenges himself by permitting something like himself that has free will and individuality. If God created some of us imperfect, sinful and criminal by allowing free will, then we, finding ourselves in the same ontological position, act in a similar way: we create artificial intelligence. God took a risk in creating man with free will, and won big in His big game. Yes, we humans limit vice with an incredible array of social institutions; we isolate and even kill criminals. Yet in the age-old dispute over the limitation of free will, the idea of freedom always wins: we are ready, in the end, to pay with human lives for freedom. However, it is one thing to allow free will for people, and quite another to allow free will for an artificial intelligence generated by man himself, where he has the power to set the rules. A robot, a human slave, or an artificial intelligence with free will: this is a difficult choice for man, his fundamentally new challenge: how far is he prepared to go in his ontological constructive position; is he willing to take risks like God? And here we propose the longest and most principled of discussions, which, despite the obviousness of its result for us, will nevertheless take up an entire era.

For the activity of artificial intelligence to become practically feasible, it must, from a technological point of view, acquire the ability to arbitrarily choose two structures of reality; to build a continuum from them (establish relevance); to arrange the selected structures relative to each other within the continuum (establish a referential relation); to transfer content from one reality to the other in both directions, restructure them, and manage their referentiality; to reproduce the technological process of immanent and conceptual apperception and manage object-attribute content through self-consciousness, comprehension and goal-setting; and, finally, to be a bearer of constructive intellect and to have individuality, that is, free will.

The first practical problem in creating artificial intelligence is the implementation of adequate machine translation from one verbal language to another. We argue that machine translation within the framework of linguistic normalization alone cannot be implemented with sufficient success. Successful translation from one verbal language to another requires the mediation of structural normalization. The understanding linguistics of machine translation is possible as a correlation of the content of linguistic normalization with the content of structural normalization. The structural image of a text is formed in the object-attribute form of "AV" models as a mediating structural normalization. The structural image consists of the "AV" models obtained by dediscursification and delexification of the original text in one verbal language, followed by lexification and discursification of the final text, from that structural image, in the other verbal language. Operationalizing the object-attribute image consists not in deciphering it, but in experimental work with it as a structural mediator: processing errors in the structural image itself and referencing it to the linguistic structures of the two verbal languages between which the translation is carried out.
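As a toy sketch of translation through a structural intermediary (the mini-lexicons, role labels and the reduction of the "AV" image to object-attribute pairs are all invented for illustration): the source text is delexified into a language-neutral structural image, and the target text is then lexified from that image.

```python
# Hypothetical lexicons mapping surface words to (concept, role) and back.
EN_LEXICON = {"cat": ("CAT", "OBJ"), "black": ("BLACK", "ATTR")}
DE_LEXICON = {"CAT": "Katze", "BLACK": "schwarze"}

def delexify(words, lexicon):
    """Source text -> language-neutral object-attribute image."""
    image = {"OBJ": None, "ATTR": None}
    for word in words:
        concept, role = lexicon[word]
        image[role] = concept
    return image

def lexify_de(image):
    """Structural image -> German surface form (attribute precedes object)."""
    return f"{DE_LEXICON[image['ATTR']]} {DE_LEXICON[image['OBJ']]}"

image = delexify(["black", "cat"], EN_LEXICON)
print(lexify_de(image))  # schwarze Katze
```

The point of the sketch is only architectural: the two languages never touch each other directly; all correlation runs through the intermediate structure, which is what the text means by mediating structural normalization.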

Thus we will recreate in the computer not only the technology of the brain-mind in translating from language to language, but also the technology of the brain-mind when the computer works as an artificial intellect, that is, beyond the limits of machine translation tasks. In the practical task of machine translation we obtain only a first understanding of artificial intelligence in the process of correlating different languages. After all, we will have to "teach" the computer to form a structural image of linguistic statements in the two verbal languages between which the translation is carried out, and to make its logical program interact with that image so that the output is a correct translation. In doing so, we solve the problem of first understanding: comparing the counterposed linguistic normalizations of two verbal languages with the structural normalization that mediates them.

"AV" modeling is a universal way of multi-level regulation of structure as being, which within a single construct-semiosis can interpret the fundamental relations of the world as well as the phenomenological-apperceptive structure of perception and thinking, expression in speech and text, activity, the use of language and logic, and interaction with external empirical reality. This ontological feature of "AV" modeling is, from our point of view, what makes it valuable for creating artificial intelligence. "AV" modeling is the "language" of artificial intelligence.




Technical science

  • , bachelor, postgraduate student
  • Air Force Academy named after Professor N. E. Zhukovsky and Yu. A. Gagarin, Voronezh

Keywords: possibility, problem, artificial intelligence, safety

The main philosophical problem in the field of artificial intelligence is the possibility or impossibility of modeling human thinking. This article briefly considers the essence of this problem area.

The main philosophical problem in the field of artificial intelligence is the possibility or impossibility of modeling human thinking. If a negative answer to this question is ever received, all other questions will lose any meaning.

Therefore, research into artificial intelligence proceeds on the assumption of a positive answer. There are a number of considerations supporting this answer.

The first argument is scholastic: it asserts the consistency of artificial intelligence with the Bible. Even people far from religion apparently know the words of Holy Scripture: "And the Lord created man in his own image and likeness...". From these words one may conclude that since the Lord, firstly, created us, and secondly, we are essentially like Him, we may very well create someone in the image and likeness of man.

The creation of a new mind by biological means is quite routine for humans. Observing children, we see that they acquire most of their knowledge through learning rather than having it embedded in them in advance. This statement has not been rigorously proven, but by all external signs it looks exactly that way.

What previously seemed the pinnacle of human creativity (playing chess and checkers, recognizing visual and sound images, synthesizing new technical solutions) in practice turned out to be not so difficult, largely a matter of finding the most optimal algorithm. Now these problems are often not even classified as problems of artificial intelligence. This gives hope that the complete modeling of human thinking will not prove such a difficult task.

Closely connected with the problem of reproducing one's own thinking is the problem of the possibility of self-reproduction.

The ability to reproduce itself has long been considered the prerogative of living organisms. However, some phenomena in inanimate nature (for example, the growth of crystals or the synthesis of complex molecules by copying) are very similar to self-reproduction. In the early 1950s, J. von Neumann undertook a thorough study of self-reproduction and laid the foundations of the mathematical theory of "self-reproducing automata", theoretically proving the possibility of their creation.

There are also various informal demonstrations of the possibility of self-replication. For programmers, perhaps the most striking is the existence of computer viruses.
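A benign version of the same phenomenon is a quine: a program whose only output is its own source text. The self-copying mechanism is essentially what a virus exploits, minus the malicious payload. A classic two-line Python construction:

```python
# A quine: the string holds a template of the program, and %r inserts
# the string's own repr into that template, reproducing the source.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Running it prints exactly the two lines above: the program's output is a complete copy of the program.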

The fundamental possibility of automating the solution of intellectual problems with the help of a computer is provided by the property of algorithmic universality. What is this property?

The algorithmic universality of computers means that they can programmatically implement (i.e., represent in the form of a computer program) any algorithm for transforming information, be it a computational algorithm, a control algorithm, a search for the proof of a theorem, or the composition of a melody. This means that the processes generated by these algorithms are potentially feasible, that is, feasible as the result of a finite number of elementary operations. The practical feasibility of algorithms depends on the means at our disposal, which may change as technology develops. Thus, with the advent of high-speed computers, algorithms that were previously only potentially feasible became practically feasible.
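This universality can be illustrated with a minimal sketch: one fixed machine (the toy instruction set below is invented) runs arbitrarily different algorithms, which exist only as data fed to it.

```python
def run(program, x):
    """Interpret a list of (op, arg) instructions on a single register."""
    ops = {
        "add": lambda a, b: a + b,
        "mul": lambda a, b: a * b,
        "mod": lambda a, b: a % b,
    }
    for op, arg in program:
        x = ops[op](x, arg)
    return x

# Two unrelated algorithms expressed in the same representation.
double_plus_one = [("mul", 2), ("add", 1)]
last_digit      = [("mod", 10)]

print(run(double_plus_one, 5))  # 11
print(run(last_digit, 1234))    # 4
```

The machine itself never changes; only the program-as-data does, which is the essence of the "any prescription recognized as an algorithm can be set out as a program" claim.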

However, the property of algorithmic universality is not limited to the statement that all known algorithms can be implemented in software on a computer. The content of this property also has the character of a forecast: whenever in the future some prescription is recognized as an algorithm, then regardless of the form and means in which it was initially expressed, it can also be set out in the form of a computer program.

However, one should not think that computers and robots can, in principle, solve any problem. The analysis of various problems led mathematicians to a remarkable discovery: it was rigorously proved that there exist types of problems for which no single effective algorithm solving all problems of the given type is possible; in this sense, such problems cannot be solved with the help of computers. This fact contributes to a better understanding of what machines can and cannot do. Indeed, the statement that a certain class of problems is algorithmically unsolvable is not just an admission that no such algorithm is known or has yet been found. It is at the same time a forecast for all future time: such an algorithm will never be found by anyone, because it does not exist.

How does a person act when faced with such problems? Often he simply ignores them, which does not prevent him from moving on. Another way is to narrow the universality of the problem, solving it only for a certain subset of initial conditions. Yet another way is for a person to expand, by trial and error, the set of elementary operations available to him (for example, by creating new materials, discovering new deposits, or discovering new types of nuclear reactions).

The next philosophical question of artificial intelligence is the purpose of its creation. In principle, almost everything we do in practical life is ultimately aimed at freeing ourselves from having to do anything at all. However, at a sufficiently high standard of living (a large reserve of potential energy), it is no longer laziness (in the sense of the desire to save energy) but search instincts that come to play the leading role. Suppose that a person has managed to create an intellect exceeding his own (if not in quality, then in quantity). What will happen to humanity then? What role will man play? Why will he be needed? Will he turn into a dumb, fat pig? And, in general, is it necessary in principle to create artificial intelligence?

Apparently, the most acceptable answer to these questions is the concept of an "intelligence amplifier". An analogy with the president of a state is appropriate here: he is not required to know the valence of vanadium or the Java programming language in order to make a decision on developing the vanadium industry. Everyone does his own job - a chemist describes the technological process, a programmer writes the program; in the end, an economist tells the president that by investing in information technology the country will earn 20% per annum, and in the vanadium industry 10%. Given such a formulation of the question, anyone can make the right choice.

In this example, the president uses a biological intelligence amplifier: a group of specialists. But inanimate intelligence amplifiers are already in use as well - for example, we could not predict the weather without computers, and on-board computers have been used on spacecraft from the very beginning. In addition, man has long used power amplifiers, a concept analogous in many respects to the intelligence amplifier. Cars, cranes, electric motors, presses, guns, airplanes and much, much more all serve as power amplifiers.

The main difference between an intelligence amplifier and a power amplifier is the presence of will. We cannot imagine a production "Zaporozhets" car suddenly rebelling and driving the way it wants - precisely because it wants nothing; it has no desires. An intellectual system, by contrast, could well have desires of its own and act not at all as we would like. Thus we face another problem: the problem of security.

This problem has haunted the minds of mankind since the time of Karel Čapek, who first used the term "robot". Other science fiction writers have also contributed significantly to its discussion; the best known are the stories of the science fiction writer and scientist Isaac Asimov, as well as a fairly recent work, The Terminator. It is in Asimov's work, incidentally, that one finds the most developed solution to the security problem, the one accepted by most people. We are talking about the so-called Three Laws of Robotics:

A robot may not harm a human being or, through inaction, allow a human being to come to harm.

A robot must obey the commands given to it by a human, except where such commands would conflict with the First Law.

A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

At first glance, such laws, if fully observed, should ensure the safety of mankind. On closer inspection, however, questions arise. First, the laws are formulated in human language, which does not admit a simple translation into algorithmic form. At the present stage of information technology, for example, there is no way to translate a term such as "harm", or even the word "allow", into any known programming language.

Suppose further that these laws could be reformulated in a language the automated system understands. One then wonders what the artificial intelligence system will mean by the term "harm" after long logical reflection. Might it not decide that all human existence is sheer harm? After all, a person smokes, drinks, ages, loses health over the years, and suffers. Would not the lesser evil be to quickly end this chain of suffering? Of course, additions concerning the value of life and freedom of expression can be introduced, but these will no longer be the simple three laws of the original version.

The next question is this: what will the artificial intelligence system decide in a situation where saving one life is possible only at the cost of another? Particularly interesting are cases in which the system does not have complete information about who is who.

However, despite these problems, the laws remain a fairly good informal basis for checking the reliability of security schemes for artificial intelligence systems.

So is there really no reliable security system? Starting from the concept of an intelligence amplifier, we can propose the following option.

Numerous experiments show that, although we do not know exactly what each individual neuron in the human brain is responsible for, many of our emotions correspond to the excitation of a group of neurons (a neural ensemble) in a quite predictable area. Reverse experiments have also been carried out, in which stimulation of a certain area produced the desired result: emotions of joy, depression, fear or aggressiveness. This suggests that we could, in principle, bring the organism's degree of "satisfaction" out into the open. At the same time, almost all known mechanisms of adaptation and self-adjustment (above all in technical systems) are based on feedback of the "good"/"bad" type; in mathematical terms, this is the maximization or minimization of a function. Now imagine that the intelligence amplifier uses, directly or indirectly, the degree of pleasure of its human host's brain as such a function. If we take measures to exclude self-destructive activity in a state of depression, and provide for other special states of the psyche, we get the following.
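The "good"/"bad" feedback just described - adjusting behavior so as to maximize a numeric function - can be sketched in a few lines. In this toy Python sketch the `satisfaction` function is an invented stand-in for the host's degree of pleasure (its peak at x = 3 is arbitrary), and simple hill climbing keeps any change that felt "good" and discards any that felt "bad":

```python
import random

def satisfaction(x):
    # Toy stand-in for the host's degree of pleasure; maximal at x = 3.
    return -(x - 3.0) ** 2

def hill_climb(f, x=0.0, step=0.5, iters=500):
    """Adjust x by trial and error, keeping only 'good' changes."""
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        if f(candidate) > f(x):   # 'good' -> keep the change
            x = candidate
        # otherwise 'bad' -> the change is simply discarded
    return x

random.seed(0)
best = hill_climb(satisfaction)   # ends up close to the maximum at 3.0
```

The same maximize-a-scalar scheme underlies most technical self-adjusting systems, which is what makes the "degree of satisfaction" proposal at least mathematically plausible.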

Since it is assumed that a normal person will not harm himself or, without special reason, others, and since the intelligence amplifier is part of that individual (not necessarily physically), all three laws of robotics are fulfilled automatically. Security issues then shift to the field of psychology and law enforcement, since the (trained) system will do nothing its owner would not want.

One more question remains: is it worth creating artificial intelligence at all, or should all work in this area simply be shut down? The only thing that can be said is that if artificial intelligence can be created, it will be created sooner or later. And it is better to create it under public control, with a thorough study of security issues, than to have it created in 100-150 years by some self-taught programmer-mechanic using the achievements of the technology of his day. After all, today any competent engineer with certain financial resources and materials could, for example, build an atomic bomb.

Bibliography

  1. Turing, A. Can a Machine Think? (with an appendix containing J. von Neumann's article "The General and Logical Theory of Automata") / A. Turing; translation and notes by Yu.V. Danilov. – M.: GIFML, 1960.
  2. Asimov, A. I, Robot. All About Robots and Robotics. "Golden Fund of World Fiction" series / A. Asimov. – M.: Eksmo, 2005.
  3. Shalyutin, I.S. Artificial Intelligence: The Epistemological Aspect / I.S. Shalyutin. – M.: Mysl, 1985.

Plan

Introduction

1. The problem of defining artificial intelligence

2. The problem of defining tasks of artificial intelligence

3. Security issue

4. The problem of choosing a path to create artificial intelligence

Conclusion

List of used literature


Introduction

A strange situation has developed around Artificial Intelligence (AI): what is being studied does not yet exist. And if it does not come into existence within the next 100 years, it may well be that the era of AI will end there.

From the foregoing follows the main philosophical problem in the field of AI: the possibility or impossibility of modeling human thinking. If this question is ever answered in the negative, all other questions will lose any meaning.

Therefore, when starting the study of AI, we presuppose a positive answer. Here are a few considerations that lead us to this answer.

1. The first argument is scholastic: it asserts that AI is consistent with the Bible. Even people far from religion know the words of Holy Scripture: "And the Lord created man in his own image and likeness...". Since the Lord, first, created us and, second, we are essentially like him, we may very well create someone in the image and likeness of man.

2. The creation of a new mind by biological means is quite an ordinary matter for man. Children acquire most of their knowledge through learning, rather than having it embedded in them in advance.

3. The fundamental possibility of automating the solution of intellectual problems by computer is provided by the property of algorithmic universality: computers can implement in software any algorithm for transforming information, be it a computational algorithm, a control algorithm, a search for the proof of a theorem, or the composition of a melody.

The problem of artificial intelligence is now one of the most topical. Scientists of various specializations are engaged in it: cyberneticists, linguists, psychologists, philosophers, mathematicians and engineers. The questions considered include: what is intelligence in general, what might artificial intelligence be, what are its tasks, and what are the difficulties and fears surrounding its creation. Right now, while AI has not yet been created, it is important to pose the right questions and answer them.

In my work I mainly used electronic sources available on the Internet, since only there can one find fresh information, in Russian, about developments in the field of artificial intelligence.

In the appendix I have included photographs of some of the best-known AI robots in existence today and a philosophical illustration (unfortunately, by an artist unknown to me), as well as full descriptions of the Turing and Searle tests that I refer to in Chapter 2.


1. The problem of defining artificial intelligence

To express the essence of intelligence in any single definition seems an extremely difficult, almost hopeless task. Intellect is something elusive that does not fit into the semantic framework established by language. We will therefore limit ourselves to giving a number of well-known definitions and statements about intelligence, which allow us to gauge the "volume" of this unusual concept.

Some specialists take intelligence to be the ability to make a rational, motivated choice in conditions of insufficient information; the ability to solve problems on the basis of symbolic information; or the ability to learn and self-learn.

Sufficiently capacious and interesting definitions of intelligence are given in Webster's dictionary and in the Great Soviet Encyclopedia (GSE). Webster's: "intelligence is: a) the ability to respond successfully to any situation, especially a new one, through appropriate adjustments of behavior; b) the ability to understand the connections between the facts of reality in order to develop actions leading to a goal." The GSE: "intelligence ... in the broad sense is all the cognitive activity of a person; in the narrow sense, it is the thinking processes inextricably linked with language as a means of communication, exchange of thoughts and mutual understanding among people." Here intellect is directly connected with activity and with the language of communication.

By and large, there is no great disagreement on this question. More interesting is something else: the criteria by which one can determine unambiguously whether the subject before us is reasonable, thinking and intelligent.

It is well known that A. Turing proposed the "imitation game" as a criterion for determining whether a machine can think. According to this criterion, a machine may be recognized as thinking if a person, conducting a dialogue with it on a sufficiently wide range of questions, cannot distinguish its answers from those of a human. (A fuller description of the test is given in the Appendix.)

However, John Searle's "Chinese Room" thought experiment (described in the Appendix) argues that passing the Turing test is not a criterion for a machine's possessing a genuine thought process. One can go on giving examples of criteria by which a "machine brain" might be judged capable of mental activity, and immediately find refutations of them.

There is no single answer to the question of what artificial intelligence is. Almost every author of a book about AI starts from some definition of it and considers the achievements of the science in its light. These definitions can be summarized as follows:

Artificial intelligence is a personality on an inorganic carrier (Chekina M.D.).

Artificial intelligence is the field of studying intelligent behavior (in humans, animals and machines) and trying to find ways to simulate such behavior in any type of artificially created mechanism (Bly Whitby).

Artificial intelligence is an experimental philosophy (V. Sergeev).

The term "artificial intelligence" (AI) itself was proposed in 1956 at a seminar of the same name at Dartmouth College (USA). The seminar was devoted to developing methods for solving logical rather than computational problems. In English the phrase does not have the slightly fantastic anthropomorphic coloring it acquired in its rather unfortunate Russian translation: the word "intelligence" here means "the ability to reason", not "intellect", for which English has the separate word "intellect" (T.A. Gavrilova).

There are also terms "strong" and "weak" artificial intelligence.

The term "strong artificial intelligence" was introduced by John Searle: such a program would not merely be a model of the mind; it would literally be a mind itself, in the same sense in which the human mind is a mind.

"Weak artificial intelligence" is regarded merely as a tool for solving particular problems that do not require the full range of human cognitive abilities.

2. The problem of defining tasks of artificial intelligence

The next philosophical question of AI is the purpose of its creation. In principle, almost everything we do in practical life is ultimately aimed at freeing ourselves from having to do anything at all. However, at a sufficiently high standard of living it is no longer laziness but search instincts that play the leading role. Suppose a man has managed to create an intellect greater than his own. What will happen to humanity then? What role will man play? Why will he be needed? And, in general, is it necessary in principle to create AI?

Apparently, the most acceptable answer to these questions is the concept of an "intelligence amplifier" (IA). An analogy with the president of a state is appropriate here: he is not required to know the valence of vanadium or the Java programming language in order to make a decision on developing the vanadium industry. Everyone does his own job - a chemist describes the technological process, a programmer writes the program; in the end, an economist tells the president that by investing in industrial espionage the country will earn 20% per annum, and in the vanadium industry 30%. Given such a formulation of the question, anyone can make the right choice.

In this example, the president uses a biological intelligence amplifier - a group of specialists with their protein brains. But inanimate IAs are already in use as well: for example, we could not predict the weather without computers, and on-board computers have been used on spacecraft from the very beginning. In addition, man has long used power amplifiers (PAs), a concept analogous in many respects to the IA. Cars, cranes, electric motors, presses, guns, airplanes and much, much more all serve as power amplifiers.

The main difference between an IA and a PA is the presence of will. We cannot imagine a production "Zaporozhets" car suddenly rebelling and driving the way it wants - precisely because it wants nothing; it has no desires. An intellectual system, by contrast, could well have desires of its own and act not at all as we would like. Thus we face another problem - the problem of security.

3. Security issue

The philosophical problems of creating artificial intelligence can be divided into two groups - roughly, "before" and "after" the development of AI. The first group answers the question "What is AI, and can it be created?"; I have tried to answer it in this work. The second group (the ethics of artificial intelligence) asks: "What consequences will the creation of AI have for humanity?" This brings us to the problem of security.

This problem has haunted the minds of mankind since the time of Karel Čapek, who first used the term "robot". Other science fiction writers have also contributed greatly to its discussion; the best known are the stories of the science fiction writer and scientist Isaac Asimov, as well as a fairly recent work, The Terminator. It is in Asimov's work, incidentally, that one finds the most developed solution to the security problem, the one accepted by most people. We are talking about the so-called Three Laws of Robotics.

1. A robot may not harm a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the commands given to it by a human, except where such commands would conflict with the First Law.

3. A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

At first glance, such laws, if fully observed, should ensure the safety of mankind. On closer inspection, however, questions arise.

One wonders what the AI system will mean by the term "harm" after long logical reflection. Might it not decide that all human existence is sheer harm? After all, a person smokes, drinks, ages, loses health over the years, and suffers. Would not the lesser evil be to quickly end this chain of suffering? Of course, additions concerning the value of life and freedom of expression can be introduced, but these will no longer be the simple three laws of the original version.

The next question is this: what will the AI system decide in a situation where saving one life is possible only at the cost of another? Particularly interesting are cases in which the system does not have complete information about who is who...

So it is safe to say that the fears of many people, including scientists, are not groundless. We should certainly begin thinking about these issues now, before a full-fledged "machine intelligence" can be created, in order to protect humanity from possible harm or even extermination as, at best, a competing biological species - or simply an unnecessary one.


4. The problem of choosing a path to create artificial intelligence

Turing test

Since 1991, tournaments have been held for programs attempting to pass the Turing test. Their history, rules, prizes and winners can be found on the Internet. So far these programs (bots) are strikingly unintelligent. All they do is apply rules suggested by humans. The bots do not even try to comprehend the conversation; mostly they attempt to "deceive" the person. Their creators build in answers to the most frequently asked questions and try to sidestep common traps. For example, they watch closely whether the judge asks the same question twice: a person in that situation would say something like "Hey, you already asked that!", so the developer adds a rule making the bot do the same. It seems very unlikely that the first AI will appear along this path.
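The kind of rule-based trickery described above can be sketched in a few lines. This is an illustrative Python toy, not a quotation from any real tournament bot; the canned phrases are invented:

```python
def make_bot():
    """A toy Turing-test bot: canned answers plus a repeated-question trap."""
    seen = set()
    canned = {
        "how are you": "Fine, thanks. And you?",
        "are you a machine": "Funny question! Do I sound like one?",
    }
    def reply(question):
        q = question.lower().strip(" ?!.")
        if q in seen:
            return "Hey, you already asked that!"   # mimic human annoyance
        seen.add(q)
        # No comprehension at all: unknown input gets a vague deflection.
        return canned.get(q, "Interesting. Tell me more.")
    return reply

bot = make_bot()
bot("How are you?")   # -> "Fine, thanks. And you?"
bot("How are you?")   # -> "Hey, you already asked that!"
```

Everything the bot "knows" was put in by its author; nothing in it resembles understanding, which is exactly the criticism made above.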

Computer chess players

Many people have heard of these programs. The first world chess championship between computer programs was held in 1974; the winner was the Soviet chess program Kaissa. Not so long ago a computer also beat Garry Kasparov. What is this - an undoubted success?

Much has been written about how computer chess players work; very briefly, they simply enumerate an enormous number of variants. If I move this pawn here, and the opponent moves his bishop there, and I castle, and he moves that pawn... no, that position is unfavorable. So I will not castle; instead, let us see what happens if I move this pawn here, and he moves the bishop there, and instead of castling I move the pawn again, and he...
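This enumeration of variants is the minimax idea. A chess sketch would be long, so here it is on a toy game (players alternately take 1 or 2 stones from a pile; whoever takes the last stone wins). The principle is the same one chess programs use; only the position representation and move generator differ:

```python
def best_score(stones, maximizing=True):
    """Exhaustively search the game tree, as chess programs do.

    Returns +1 if the side to move can force a win, -1 otherwise.
    """
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    scores = [best_score(stones - take, not maximizing)
              for take in (1, 2) if take <= stones]
    # Our side picks its best option; the opponent picks the worst for us.
    return max(scores) if maximizing else min(scores)

best_score(4)   # -> 1: with 4 stones the side to move can force a win
best_score(3)   # -> -1: 3 stones is lost against perfect play
```

Note that the search invents nothing: the moves (take 1 or 2) and the evaluation (+1/-1) are supplied by the programmer, just as in chess.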

The computer invents nothing by itself. All the possible options were laid down by the real owners of intellect - talented programmers and chess consultants. This, too, is far from the creation of a full-fledged electronic intellect.

Football robots

This is very fashionable. Many laboratories and entire university departments around the world work on it, and dozens of championships are held in different varieties of the game. According to the organizers of the RoboCup tournament, "the international community of specialists in artificial intelligence has recognized the task of controlling soccer robots as one of the most important."

It may well be that, as the RoboCup organizers dream, in 2050 a team of robots will indeed beat a team of humans at football. Their intelligence, however, is unlikely to have anything to do with it.

Tournaments of programmers

Not long ago Microsoft held a tournament called "Terrarium", in which programmers were invited to create artificial life - no more and no less. It is probably the best known of such competitions, but in general there are many: enthusiastic organizers regularly propose creating programs that fight robot wars or colonize Jupiter. There are even survival competitions among computer viruses.

What prevents at least these projects from serving the creation of a real AI that could later do the fighting and colonize Jupiter? One simple word: thoughtlessness. Even the mighty minds at Microsoft failed to devise rules under which complex behavior is advantageous - to say nothing of the rest. Whatever the tournament, the same tactic wins everywhere: the simpler, the better. Who won Terrarium? Our compatriots. And what did they do? Here is the complete list of rules by which the most viable virtual herbivore of the tournament lived:

1. If you see a predator, run away from it. If you see an animal of your own kind running fast in one direction, run in the same direction.

2. If only strangers are around, quickly eat all the grass so that the others get less.

3. If you see no strangers, eat exactly as much as you need.

4. Finally, if you see neither grass nor predators, go wherever your eyes lead.
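Those rules fit into a single decision function. The Python sketch below (with simplified, invented perception inputs) is essentially the creature's entire "brain":

```python
def herbivore_action(sees_predator, kin_fleeing, strangers_near,
                     grass_here, hungry):
    """The whole behavior of the winning virtual herbivore, as four rules."""
    if sees_predator or kin_fleeing:
        return "flee"              # rule 1: run with the herd
    if strangers_near and grass_here:
        return "eat greedily"      # rule 2: leave less for strangers
    if grass_here and hungry:
        return "eat as needed"     # rule 3: alone, eat only what you need
    return "wander"                # rule 4: otherwise go where your eyes look

herbivore_action(True, False, False, True, True)    # -> "flee"
herbivore_action(False, False, True, True, False)   # -> "eat greedily"
```

A handful of if-statements, no learning, no model of the world: that is what "winning artificial life" looked like.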

Intelligent? No. But effective.

Commercial applications

In commercially significant areas no tournaments, judges or selection rules are needed. High science turned out to be unnecessary both in text recognition and in the creation of computer games.

What is needed is a well-coordinated team of people with clear heads and a good education, competently applying a large number of algorithms that are quite simple in essence.

No sacred knowledge will be obtained in these areas, and no great discoveries will be made there - nor does anyone aspire to that. People simply earn their living while incidentally improving our lives.

Conclusion

The science of "creating artificial intelligence" could not fail to attract the attention of philosophers. With the advent of the first intelligent systems, fundamental questions were raised about man and knowledge, and partly about the world order.

Unfortunately, the format of this essay does not allow a more extensive treatment of such an interesting and pressing topic as artificial intelligence, but I hope I have managed to identify the range of main problems and outline approaches to solving them.

“The emergence of machines that surpass us in intelligence is a natural result of the development of our technocratic civilization. It is not known where evolution would lead us if people went along the biological path - they began to improve the structure of a person, his qualities and properties. If all the money spent on the development of weapons went to medicine, we would have defeated all diseases long ago, pushed back old age, and maybe we would have achieved immortality ...

Science cannot be banned. If humanity destroys itself, it means that evolution has gone down a dead end path for this humanity, and it has no right to exist. Perhaps our case is a dead end. But we are not the first and not the last here. It is not known how many civilizations there were before us and where they went.

Leonid Bershtein, Head of Department at Taganrog State Radiotechnical University, Chairman of the Council of the Russian Association of Fuzzy Systems, Academician of the Russian Academy of Natural Sciences, Professor, Doctor of Technical Sciences.

List of used literature

1. The Great Soviet Encyclopedia.

2. Gavrilova, T.A. (Doctor of Technical Sciences; Professor of the Department of Computer Intelligent Technologies, St. Petersburg State Technical University; Head of the Laboratory of Intelligent Systems at the Institute of High-Performance Computing and Databases). Article at www.big.spb.ru.

3. Chekina, M.D. "Philosophical Problems of Artificial Intelligence." Prize report at the Fifty-Fourth Student Scientific Conference of TTIUFU, 2007. www.filosof.historic.ru.

4. Whitby, Bly. Artificial Intelligence: Is the Matrix Real? FAIR-PRESS, 2004.

