[personal profile] leblon
I have long been interested in the question of whether free will is compatible with the modern physical picture of the world. This question often comes up in debates between atheists and believers, as the latter's "trump card": if the two are incompatible, then, since free will is intuitively obvious to us, there must be a "hole" in the physical picture of the world. Many philosophers, however, maintain that there is no contradiction (this position is called compatibilism). I recently read Elbow Room, a book on this subject by the prominent compatibilist Daniel Dennett, and I want to share my understanding of it, because I found Dennett quite convincing. (He has a later book on the same topic, Freedom Evolves, which I have not read yet.) To avoid struggling with translating the English terms into Russian, I will write in English. The key word here is emergence, and I do not know how to translate it into Russian.

Let me begin with definitions. What is free will? According to Dennett, most philosophers distinguish actions which have been “caused” from actions done “for a reason”. The former have a cause (physical or physiological), while the latter have a goal of which the agent of the action is conscious. The ability to perform the latter kind of action is what distinguishes agents with free will from all others. Actions which have a cause often serve some goal too, at least from the point of view of an external rational observer, but if the agent is not aware of this goal, this does not count as a willed action. Say, insects are capable of performing complicated tasks which serve the goal of reproduction, but the program directing them to do these tasks is hard-wired, and they are not capable of reflecting on their own actions. Hence they do not have free will, according to this definition.

Do agents with a free will exist at all, or is it an illusion? The usual argument against free will is that everything in the physical world has a cause. If our brain is merely a collection of neurons, its behavior is determined by the genetic program which formed it, physiological stimuli, and the history of the organism, including all interactions with other agents around it. Therefore all of its responses have a cause, in principle, and the above distinction is incoherent. That is, the notion of free will is logically incoherent, and no distinction can be drawn between a human and, say, a robot with a complex program mimicking a human.

Dennett’s response to this argument is that rational acts are a special subset of all acts, and thus have both a cause and a reason. More precisely, Dennett says that free will is not an illusion, but neither is it fundamental. Rather, it is an “emergent” property. What does it mean? For a physicist the following analogy might help. The fundamental laws of nature do not have dissipation: they are “conservative”. Nevertheless, dissipation is not an illusion, and dissipative properties, like viscosity or conductivity, play a very prominent role in “higher-level” descriptions, e. g. in hydrodynamics. One says that dissipation, and in particular viscosity, are emergent properties. They describe coarse-grained behavior of systems with a large number of identical particles. The use of these notions is not mandatory, but it helps tremendously with understanding the properties of such systems, since a molecular-level description is often not very illuminating. Similarly, Dennett in effect says that free will is a convenient short-hand describing novel behaviors of sufficiently complicated agents.
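
To make the physics analogy concrete, here is a toy sketch of my own (not from Dennett's book; all names and numbers are illustrative): many non-interacting particles bounce between two walls under an exactly time-reversible rule, yet the coarse-grained density relaxes toward uniformity and never visibly returns. Irreversibility here, like free will in Dennett's account, is a property of the higher-level description.

```python
# Toy illustration (my own, not from Dennett): reversible microscopic dynamics,
# effectively irreversible coarse-grained behavior.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
x0 = rng.uniform(0.0, 0.25, N)                        # all particles start in the left quarter
v = rng.normal(1.0, 0.2, N) * rng.choice([-1, 1], N)  # spread of speeds -> dephasing

def position(t):
    """Billiard in [0, 1]: free flight on the unfolded line, folded back by a triangle map."""
    u = np.mod(x0 + v * t, 2.0)
    return np.where(u > 1.0, 2.0 - u, u)

for t in (0.0, 0.5, 2.0, 10.0, 50.0):
    frac_left = np.mean(position(t) < 0.5)            # coarse-grained observable
    print(f"t = {t:5.1f}   fraction in left half = {frac_left:.3f}")

# The microscopic rule is exactly reversible (flip all velocities at t = 50 and the
# initial blob reassembles), yet the coarse-grained density settles near 1/2 and
# stays there for astronomically long times. This irreversibility is not an illusion,
# but neither is it fundamental: it belongs to the coarse-grained description, which
# is roughly Dennett's sense of "emergent".
```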

From this point of view, the real problem is to understand why the intuition of free will is so universal among humans, what sort of agents it applies to, and what limitations this intuition has. In particular, what exactly distinguishes “rational” agents, such as humans, from “blind” agents, such as insects? Below I will sketch Dennett’s understanding and its implications for the problem of constructing an AI which can pass the Turing test.

Before proceeding, let me make a remark regarding the above definition of free will. This definition, while common among philosophers, does not agree with the everyday intuition about what free will is. Usually, free will is associated with a lack of predictability of an agent’s actions. Consider, for example, Stanislaw Lem’s novel “The Mask”. In this novel, the protagonist is a sophisticated AI programmed to seek out and kill a particular person. But despite the program, the AI develops an affection for her target (who is on the run), and when asked whether she will kill the target when she catches up with him, answers “I don’t know”. Then the monk who asked the question declares that the AI is fundamentally no different from a human (“You are truly my sister.”) The reason the AI is not able to answer the question is that some of the impulses to action are generated unconsciously, and while the AI does not consciously want to kill the target, it cannot fully control these unconscious impulses. Because of this, the monk concludes that the AI possesses free will. From this viewpoint, to have free will an agent must have both conscious and unconscious goals. This seems to be different from the above “philosophical” definition. Indeed, imagine a completely rational agent all of whose motivations are conscious. Should one regard it as free or not? In this connection, note that while the distinction between conscious and unconscious is very intuitive, its precise meaning is not at all obvious. Dennett addresses this second understanding of free will as well, but not very explicitly. I will comment on this below.

Let us now return to what distinguishes rational agents from all other agents. As a preliminary, we need to understand what an agent is. According to Dennett, it is best to approach this question from an evolutionary standpoint. That is, our best examples of agents are living organisms, and all definitions should be tested against these examples. At the most basic level, an agent is a system which is capable of maintaining its internal state despite a changing environment. Bacteria, for example, are agents. Many of their functions are a way to achieve homeostasis. In a sense, one can say that a bacterium has the goal of maintaining its own homeostasis. It has other goals too, like reproduction, but this is less important. Say, a robot does not have to reproduce in order to be regarded as an agent; it only needs to be able to act to preserve its own integrity and functional ability. More complicated agents are capable of learning and even of planning future actions. But this does not make them rational yet.
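
As a minimal sketch of this most basic notion of agency (my own toy code; the class name, setpoint, and numbers are invented for illustration, not taken from the book), consider a system that counteracts environmental perturbations so as to keep an internal variable near a setpoint:

```python
# A toy "agent" in the minimal sense described above: something that acts so as
# to preserve its internal state against a changing environment. Purely illustrative.
import random

class HomeostaticAgent:
    def __init__(self, setpoint=37.0, gain=0.9):
        self.temp = setpoint      # the internal state the agent tries to preserve
        self.setpoint = setpoint
        self.gain = gain          # how strongly it counteracts deviations

    def sense_and_act(self, ambient):
        # the environment pulls the internal state toward the ambient value...
        self.temp += 0.1 * (ambient - self.temp)
        # ...and the agent pushes it back toward the setpoint
        self.temp += self.gain * (self.setpoint - self.temp)
        return self.temp

agent = HomeostaticAgent()
for _ in range(50):
    ambient = 20.0 + random.uniform(-5.0, 5.0)   # fluctuating environment
    agent.sense_and_act(ambient)

print(f"internal state: {agent.temp:.1f}")   # stays close to 37, not drifting to ~20
```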

Consider now an agent which is not only capable of learning and planning but is also a social animal and can communicate with other similar animals. We will also assume that these animals are not clones of each other. They live in groups and benefit from cooperation, but since they are not genetically identical and have varying learning experiences, it is important for them to be able to model each other’s behavior based on the available evidence. Since these animals are capable of complicated planning, modeling an agent’s behavior is virtually impossible without modeling its thought process. But if an agent is capable of modeling other similar agents, including aspects of their thought process, the same machinery can be used to model its own behavior and thought process. Such an agent is capable of creating complex scenarios of future events which incorporate its own responses. These pictures of alternate futures are part of our planning machinery. Having created them, we choose the course of action which maximizes our “utility function”. From Dennett’s viewpoint, the ability to choose a course of action based on such deliberation is free will.
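
Here is a toy version of that deliberation loop (my own sketch; the payoffs and the peer model are invented for illustration): the agent reuses its model of a peer to spin out one-step "alternate futures" that include both its own move and the peer's likely reply, and then picks the move with the highest expected utility.

```python
# Toy deliberation: simulate futures that include another agent's modeled response,
# then choose the action maximizing a utility function. All numbers are invented.

# Payoffs for a one-shot interaction: (my_move, peer_reply) -> my utility
PAYOFF = {
    ("share", "share"): 3, ("share", "hoard"): 0,
    ("hoard", "share"): 4, ("hoard", "hoard"): 1,
}

# The agent's model of this particular peer: probability that the peer replies
# "share", conditioned on what it sees me do (assumed learned from past encounters).
peer_model = {"share": 0.8, "hoard": 0.2}

def expected_utility(my_move):
    p_share = peer_model[my_move]
    return p_share * PAYOFF[(my_move, "share")] + (1 - p_share) * PAYOFF[(my_move, "hoard")]

def deliberate():
    # the "pictures of alternate futures": one simulated outcome per candidate action
    futures = {move: expected_utility(move) for move in ("share", "hoard")}
    return max(futures, key=futures.get), futures

choice, futures = deliberate()
print(futures)   # {'share': 2.4, 'hoard': 1.6}
print(choice)    # 'share' -- against this peer, cooperating maximizes expected utility
```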

These models of the future are also what our mind interprets as “free will” in the colloquial sense, i.e. as unpredictability. This is really a misnomer, at least according to Dennett, because from his viewpoint a completely rational agent is predictable, yet has free will in the “philosophical” sense. We now come to the definition of a rational agent. As far as I can tell, Dennett’s rational agent is an agent of the sort considered in the previous paragraph whose decision process is at least partially conscious. But what is a conscious process? Again, as far as I can tell, a conscious process is a brain process which can be communicated to another agent. The ability to communicate with others is thus regarded as an essential part of what makes us rational. From the evolutionary point of view, the ability to verbalize our reasons probably arose as a way to influence other individuals. Thus language and consciousness are closely related, and in fact the former is regarded as a prerequisite for the latter. A completely rational agent is one which can communicate its whole decision process to another agent. We, of course, are only partially rational, and the unconscious part of the decision process, coupled with conscious modeling of the future, is what creates the illusion of uncaused actions.

This proposal about the nature of free will has some practical implications. The most important one is probably in the area of AI. An AI which could be called a rational agent in the above sense has to have the following properties. (1) It should be able to communicate with other rational agents (us). (2) It should be able to plan its actions in a way which depends on the environmental factors, including information obtained from other rational agents. (3) It should be able to model the behavior and thought process of other rational agents, taking into account the information obtained during communication. (4) It should be able to create detailed scenarios of the future, including its own reaction to future events, and choose a course of action based on the results of this modeling and its utility function. Seems like a tall order, but not impossible.
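
For concreteness, here is a minimal skeleton of those four capabilities, with the obvious caveat that every class and method name below is a placeholder of my own invention, not an existing API and not a design taken from Dennett:

```python
# Hypothetical skeleton of a "rational agent" with the four properties listed above.
from abc import ABC, abstractmethod

class RationalAgent(ABC):
    @abstractmethod
    def communicate(self, message, other) -> str:
        """(1) Exchange parts of its deliberative process with other rational agents."""

    @abstractmethod
    def plan(self, environment, received_info) -> list:
        """(2) Plan actions conditioned on the environment and on communicated information."""

    @abstractmethod
    def model_other(self, other, observations):
        """(3) Maintain a predictive model of another agent's behavior and thought process."""

    @abstractmethod
    def simulate_futures(self, candidate_actions) -> dict:
        """(4a) Roll out detailed scenarios that include the agent's own future reactions."""

    def choose_action(self, candidate_actions, utility):
        """(4b) Pick the candidate whose simulated outcome maximizes the utility function."""
        outcomes = self.simulate_futures(candidate_actions)
        return max(outcomes, key=lambda a: utility(outcomes[a]))
```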


Date: 2015-02-21 04:20 pm (UTC)
From: [identity profile] juan-gandhi.livejournal.com
All I see is the lack of definitions. Searle's book has the same problem.
Not sure about the author's judgment regarding insects: how does he know? Guessing?

OTOH, I got a feeling that the so-called "rational" is considered to be strictly based on a set theory with AC. Would be irrational otherwise.


Date: 2015-02-21 04:37 pm (UTC)
From: [identity profile] leblon.livejournal.com
There is a definition of a rational agent here, but it is complicated and depends on what "conscious" means. If you accept that "conscious" means "able to communicate part of one's deliberative process to another agent", then one has a reasonable definition.


Date: 2015-03-19 05:19 pm (UTC)
From: [identity profile] gineer.livejournal.com
Here is my version of a definition of intelligence (actually not only that, but also a hypothesis about an AI architecture) -- http://strong-ai.livejournal.com/15277.html

To state it briefly: intelligence (as we mean it in the common sense) is the result of the integrated work of the visual cortex and the verbal one.


Date: 2015-02-21 07:02 pm (UTC)
From: [identity profile] trurle.livejournal.com
Well, Dennett employs several terms without definition, like "communication" - social insects obviously communicate, but ---.


Date: 2015-02-21 08:41 pm (UTC)
From: [identity profile] leblon.livejournal.com
This particular objection is addressed explicitly above, as well as in Dennett's writings. There are different levels of communication. Rational agents are capable of communicating their own reasoning (at least partially), not just the "final outcome". As far as we know, no insect is capable of that.


Date: 2015-02-21 08:47 pm (UTC)
From: [identity profile] trurle.livejournal.com
> Rational agents are capable of communicating their own reasoning (at least partially)

Unfortunately, this is a definitional loop: the ability to express one's reasoning defines a rational agent, and rationality is defined as the ability to understand communicated reasoning.


Date: 2015-02-21 10:04 pm (UTC)
From: [identity profile] leblon.livejournal.com
No, there is no loop. There is just a terminological confusion, and I was imprecise. Every agent, whether rational or not, does some sort of "reasoning". Let us call it a "deliberative process", as Dennett does, to avoid conflation with the word "reason" in the sense of "ratio". A deliberative process is simply the information processing which an agent goes through to decide on its course of action. A robot does it, an insect does it, a dog does it, and a human being does it. The difference between a rational agent and other agents is that a rational agent can communicate at least parts of its deliberative process to others and influence them in this way, and in turn can be influenced by such communications from other agents. In short, a rational agent "can be moved by reason".


Date: 2015-02-21 07:28 pm (UTC)
From: [identity profile] chaource.livejournal.com
1. Dennett is a philosopher. I met him once, and I did not get a positive impression of his person or his philosophy. I got the impression that he just juggles words. What exactly does it mean, empirically, to have "free will"? What kind of experiment could possibly refute the hypothesis that we are "acting rationally" or "acting on free will" or whatever?

2. I believe that we still do not understand the nature of language and communication. We do not really know what it means to "communicate" something to someone (except in isolated subject domains where the scope of possible ideas is finite). We still do not understand what it means to "think" or to have "intelligence" or to "understand one's actions". There are some formal logics that try to describe this, but we are very far from even beginning to grasp this subject. Until this happens, there is no chance we can create something that we will ourselves believe to be a "real AI".

3. We will not be likely to believe that we have a "full AI" until it can behave and talk substantially as a human, including feelings, unconscious desires, psychological instability, and so on. For this, it appears that we need to fully understand the way a human brain works. To understand itself, a brain has to contain a fully explicit, "conscious" model of itself, which seems to be contradictory because a fully explicit model of some object would need to contain more information than the object being modeled. So it seems that the brain cannot fully understand its own workings. Therefore, we will never actually be able to design and create a "real AI".

4. If a "real AI" can be created only in an "emergent" sense, then the situation will be like this: We first create some kind of machine that is more primitive than a "full AI". This machine evolves itself in some unknown way, not explicitly programmed or designed by us, so that eventually the "full AI" capability "emerges" in the machine. Well, this machine is called a baby. We already can make that. This is presumably not what is meant when one hears of projects about "creating an AI".
Edited Date: 2015-02-21 07:36 pm (UTC)


Date: 2015-02-21 09:14 pm (UTC)
From: [identity profile] leblon.livejournal.com
1. Dennett's person has little to do with his arguments. As for his style of arguments, I did not get the impression he juggles words. Perhaps it is best to give a link to one of his interviews, where he tries to be as clear and direct as possible. http://www.salon.com/2014/12/28/the_truth_about_free_will_does_it_actually_exist/

2. Communication in general is simply transmission of information. Pretty straightforward. There are indeed different "levels" of communication, and Dennett discusses this at length. As for what "thinking" means, Dennett does not claim to understand it either. Neither does he say that creating an AI is easy. But this is not directly relevant to the question of free will.

3. A single brain cannot contain a model of itself. It also cannot contain a complete text of "War and Peace" or a complete proof of the four-color theorem. Luckily we have such things as writing and computers to help us organize and store large quantities of information.

4. Dennett does not say that an AI has to "emerge" in this sense. Emergence is used in a different sense in his work (the same sense as in the physics literature). It might turn out that an AI has to evolve and learn before it really turns into a full AI. Then we would not have a full control of its final capabilities and would not have a full understanding of its workings either. Designing such an AI would be a tremendous accomplishment nevertheless. (We have not designed babies, Mother Nature did it. So we cannot take credit here. Similarly, it would be a great accomplishment to design a self-replicating mechanism, even though we already have cells.)


Date: 2015-02-22 06:55 am (UTC)
From: [identity profile] chaource.livejournal.com
1. Whether I like Dennett personally or not, I was of course not trying to use an ad hominem argument against him. I just said my impression was not good.

I don't think he ever described an experiment that could in principle refute the hypothesis of free will, or the hypothesis that we act "with intention", or "rationally", or whatever. In the Salon article, too, he is just chatting away. Without a clear, decisive experiment, all this talk about "free will" stays firmly in the word-juggling domain.

2. Transmission of information, yes. If we could reduce our thought processes to information (i.e. to a formal logical description of some kind), we would be in business. But we don't know how to represent our thoughts fully in some formal logical system. So we don't know what it really means to have an "agent that can communicate its thought processes". That's all I'm saying.

I agree with Dennett's description of different levels of communication. Of course, if we had a neuron-by-neuron description of the brain, that wouldn't have helped us.

3. I agree with you here. We could in principle imagine that all the required information is stored in some external medium. However, I believe that the amount of information necessary for representing a human brain in a computer is going to remain beyond reach forever.

4. We already have some self-evolving simulations today ("artificial life") and also we have "machine learning". Obviously this is not enough of an achievement though.

My analogy with babies was actually intended to be more precise. Yes, we do not "design" babies in the same way as we design a computer. However, the amount of understanding that goes into selecting a human partner for procreation is staggering and certainly far more than the amount of information required for designing a computer. Also, the amount of information that would have to "emerge" in the transition to the "full AI" is vastly more than the amount of information that we might be able to "design". So, I think that in a very precise sense, we would be "designing" a full AI in the same sense as we are "designing" babies today. In other words, the difference between the AI that we would be able to design and the "full AI" is going to be so vast that it would not be appropriate to say that we actually designed it.
Edited Date: 2015-02-22 06:58 am (UTC)


Date: 2015-02-22 05:18 pm (UTC)
From: [identity profile] leblon.livejournal.com
Regarding 1+2, I think what Dennett is doing is trying to distill the meaning of free will. A good definition should be useful, in the sense that one should be able to test whether X has free will or not. I like his approach precisely because it is rather empirical. Namely, he proposes to define a rational agent (which he identifies with an agent possessing free will) as an agent with a complex planning facility which can communicate its reasoning process to other agents, and conversely be influenced by information about the thought processes of other rational agents. Then to test whether X is rational one has to check whether one can affect X's behavior by communicating one's reasoning/thoughts to it, and whether X can communicate its reasoning/thoughts to you. I think this is pretty close to how I would try to test whether an agent is rational. Of course, this definition presupposes that we can communicate with X at all, but this is a separate problem, and it has been addressed, for example, by people thinking about SETI.


Date: 2015-03-03 02:19 am (UTC)
From: [identity profile] chaource.livejournal.com
I do not accept this as an empirical definition of free will or of a rational agent. I explained above why I do not think we have a clear, unambiguous definition of what it means to "communicate one's thought process".

If we take this definition at face value, then obviously http://en.wikipedia.org/wiki/SHRDLU is a rational agent because its behavior (in a virtual world) can be "affected by communicating thoughts to it". SHRDLU can make rational decisions based on information (e.g. which block to lift or not to lift), and it can certainly "explain its reasoning" (e.g. it answers that it can't lift a certain block because of obstructions). So SHRDLU is a "rational agent" and has "free will" according to this "empirical" definition.

I'm certain that all of us (including Dennett) would not agree with this conclusion, though.

Generally, there is a pattern among philosophers when they talk about AI. First, they give a set of "definitions" or "criteria" that a computer should fulfill in order to be a "thinking agent", or to have "free will", or to have "real intelligence". Then, a computer program is presented that substantially fulfills these requirements. At this point philosophers promptly explain why this is not good enough and why the criteria for being "really intelligent" need to be changed.
Edited Date: 2015-03-03 02:22 am (UTC)


Date: 2015-04-14 12:57 am (UTC)
From: [identity profile] leblon.livejournal.com
SHRDLU is a good example to think about; I did not know about it. Indeed, Dennett would not agree that it is a rational agent, and the reasons for this are explained in his book. I might have oversimplified Dennett's definition of a rational agent and, more generally, of an agent. His (not necessarily rational) agent always has some clearly defined "interests" or "goals". At the very least, the agent must be able to counteract some changes in its environment to preserve its own integrity. A biological agent also has the goal of replicating. SHRDLU has no goals of its own, and it cannot "fend for itself". Already for this reason it is not an agent, much less a rational agent.

Another reason it is not a rational agent is that it cannot model other rational agents which communicate with it (that is, humans).

But this is a good example because it shows that Dennett's notion of a rational agent depends on the context: it depends on the complexity of the environment as well as on the complexity of the other agents around it. One can imagine a society of simple robots which operate in a simple SHRDLU-like environment, have hard-wired goals, an ability to communicate with other robots, to transmit and interpret information from them, and some ability to learn (which makes these robots non-identical). Imagine also that their learning ability is developed enough to recognize that different robots have somewhat different patterns of behavior and to take this into account (for example, by ignoring information from robots which tend to "lie"). If the environment is simple, these robots would not be rational with respect to us (i.e. they would not be able to model us), but I would say they are rational with respect to each other. Similarly, to a society of highly intelligent beings, humans could appear no more rational than these robots.

Sorry for not answering earlier.


Date: 2015-04-14 07:24 pm (UTC)
From: [identity profile] chaource.livejournal.com
Of course, SHRDLU cannot model other instances of itself, and does not have any goals that originate within it. However, the same pattern of philosophical arguments about AI is about to emerge here: once a computer program is presented that fulfills the requirements for a "real AI" or a "thinking machine", other requirements will promptly come up. There are already PhD theses from MIT that describe programs that model their own thinking, that can correct their own previous mistakes in reasoning, that create complicated plans with goals and sub-goals, etc. MIT was very active in this line of AI research about 30-40 years ago. We can certainly write a computer program that models some aspects of the behavior of other programs, or communicates knowledge, or changes its behavior on the basis of received knowledge, or even talks in a natural language about its own reasoning (certainly, in a "restricted" domain, e.g. in a world of blocks like SHRDLU, but then the human world is also "restricted", and humans too can only talk about a finite set of things in the world).

However, this will not convince anyone. Human psychology strongly resists the idea that a machine can "really think". Philosophers will never accept the idea that any computer "really thinks", no matter what programs are presented to them.

I have a better explanation for this, and a better criterion for what it means to have a "really thinking machine" or a "really strong AI". The explanation for the resistance of philosophers is, briefly, that humans evolved to talk to other humans and to assume that other humans think in a similar way. There is no way to control humans as effectively and mechanically as we control a machine. However, real life forces us to talk to other humans and to attempt to influence their behavior. The way we do this is based on an intuitive grasp (modelling) of certain aspects of other people's behavior and thinking. This intuitive mechanism is activated emotionally whenever we see some creature that resembles a human or a sufficiently highly developed animal, whenever we see something that has a head, a face, etc., and that behaves in a somewhat complicated way. Let's call this mechanism the "recognition of an empirical human". Through this mechanism, people subconsciously identify pet animals and babies also as "empirical humans". So, human psychology will never accept that empirical humans are machines, or that machines are empirical humans. Machines can be controlled effectively and almost entirely mechanically; we know that it is not necessary to think about them in the same way as we think about humans. We know that an effective way of controlling a machine is to press buttons, etc., not to attempt to interact with it using our emotions and social intuitions. So psychologically we cannot accept a machine as an empirical human.

However, if we succeed in creating a machine whose behavior is so complex that we won't be able to control it except by engaging in human-like interactions, we will then instantly believe that it is an empirical human, and we will have to grant it human rights and responsibilities.
Edited Date: 2015-04-14 07:27 pm (UTC)


Date: 2015-04-14 07:35 pm (UTC)
From: [identity profile] chaource.livejournal.com
The real question, in my view, is whether a machine should have "rights" and be "held responsible" for the consequences of its behavior. This is not a theoretical philosophical question, but a very practical question about how best to control the behavior of the machine. When humans talk about "rights" and "responsibilities", they are trying to influence each other's behavior. The behavior of humans is so complex that it can't be simply guided by, say, giving humans a certain type of food or showing them a certain piece of information. When we talk about human "responsibility", we actually mean that a person's behavior will be influenced in a desired way if we tell this person that what they do will "have consequences", or that "it will be their own achievement", etc. This is the only effective way of influencing human behavior and achieving good results, as totalitarian societies illustrated in the previous century. When and if we have a machine whose behavior is so complex that the only reasonable and effective way of influencing it would be to talk about how glorious it would be if the machine did something, and how we will all love the machine and talk about it and post its pictures on Instagram - only then will everybody, including the philosophers, agree that this machine is a "real AI", that it has "free will", a "moral status" equivalent to a human's, and so on. Then, I'm certain, philosophers will outdo each other in "proving" that it is this machine - unlike its predecessors - that actually has the correct "essential qualities" of a human.
Edited Date: 2015-04-14 07:35 pm (UTC)


Date: 2015-02-22 05:31 pm (UTC)
From: [identity profile] primaler.livejournal.com
"Эмергентность": ру вики (https://ru.wikipedia.org/wiki/%D0%AD%D0%BC%D0%B5%D1%80%D0%B4%D0%B6%D0%B5%D0%BD%D1%82%D0%BD%D0%BE%D1%81%D1%82%D1%8C)

Коряво, конечно, но как есть (заслуга, полагаю, Георгия "Синергетика" Малинецкого и ко).
Edited Date: 2015-02-22 05:34 pm (UTC)


Date: 2015-02-28 08:12 am (UTC)
From: [identity profile] Андрей Гаврилов (from livejournal.com)
> since free will is intuitively obvious to us

This is exactly where the error lies. From the fact that something seems "intuitively obvious" to someone, it does not follow that it is true. And to start investigating a question from a premise that directly and unambiguously answers that very question (to say nothing of doing so without first defining, let alone clarifying, the concepts)...

Dennett, meanwhile, is doing the compatibilists' favorite thing: he redefines (or, more precisely, _defines_) the notion of "free will" so that, in place of the intuitive "freedom to act differently in exactly the same situation [(because 'I so will it')]", which is oxymoronic if you look at it closely, he gets something that bears the same name but does not contradict determinism (in his version of it, of course, the one with "there is randomness at the quantum level").
Edited Date: 2015-02-28 08:26 am (UTC)
