Daniel Dennett and the freedom of the will
Feb. 21st, 2015 09:16 am

Let me begin with definitions. What is free will? According to Dennett, most philosophers distinguish actions which have been "caused" from actions done "for a reason". The former have a cause (physical or physiological), while the latter serve a goal of which the agent of the action is conscious. The ability to perform the latter kind of action is what distinguishes agents with free will from all others. Actions which have a cause often serve some goal too, at least from the point of view of an external rational observer, but if the agent is not aware of this goal, the action does not count as willed. Insects, for example, are capable of performing complicated tasks which serve the goal of reproduction, but the program directing them to perform these tasks is hard-wired, and they are not capable of reflecting on their own actions. Hence they do not have free will, according to this definition.
Do agents with free will exist at all, or is free will an illusion? The usual argument against free will is that everything in the physical world has a cause. If our brain is merely a collection of neurons, its behavior is determined by the genetic program which formed it, by physiological stimuli, and by the history of the organism, including all interactions with other agents around it. Therefore all of its responses have a cause, at least in principle, and the above distinction cannot be drawn. On this view, the notion of free will is logically incoherent, and there is no difference between a human and, say, a robot with a complex program mimicking a human.
Dennett's response to this argument is that rational acts are a special subset of all acts, and thus have both a cause and a reason. More precisely, Dennett says that free will is not an illusion, but neither is it fundamental. Rather, it is an "emergent" property. What does this mean? For a physicist the following analogy might help. The fundamental laws of nature have no dissipation: they are "conservative". Nevertheless, dissipation is not an illusion, and dissipative properties, like viscosity or conductivity, play a very prominent role in "higher-level" descriptions, e.g. in hydrodynamics. One says that dissipation, and in particular viscosity, are emergent properties. They describe the coarse-grained behavior of systems with a large number of identical particles. The use of these notions is not mandatory, but it helps tremendously with understanding the properties of such systems, since a molecular-level description is often not very illuminating. Similarly, Dennett in effect says that free will is a convenient shorthand describing novel behaviors of sufficiently complicated agents.
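To make the analogy concrete, here is a toy numerical sketch (mine, not Dennett's; I also replace the conservative molecular dynamics with a random walk just to keep the code short). Each "particle" follows a trivial microscopic rule, yet the population as a whole obeys a diffusive law that no single particle knows anything about:

```python
import random

# Toy illustration of emergence via coarse-graining: every particle performs
# an unbiased +/-1 random walk, but the coarse-grained density spreads
# according to the diffusion law  variance = 2*D*t  (here D = 1/2, so var = t).
N_PARTICLES = 100_000

positions = [0] * N_PARTICLES
for step in range(1, 101):
    positions = [x + random.choice((-1, 1)) for x in positions]
    if step % 25 == 0:
        mean = sum(positions) / N_PARTICLES
        var = sum((x - mean) ** 2 for x in positions) / N_PARTICLES
        # The emergent "macroscopic" regularity, invisible in the update rule:
        print(f"t={step:3d}  measured variance={var:8.1f}  diffusion predicts {step}")
```

The diffusion coefficient is a useful notion only at the coarse-grained level; nothing in the single-particle rule mentions it, just as, on Dennett's picture, nothing at the level of individual neurons mentions free will.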
From this point of view, the real problem is to understand why the intuition of free will is so universal among humans, what sort of agents it applies to, and what limitations this intuition has. In particular, what exactly distinguishes "rational" agents, such as humans, from "blind" agents, such as insects? Below I will sketch Dennett's understanding and its implications for the problem of constructing an AI which can pass the Turing test.
Before proceeding, let me make a remark regarding the above definition of free will. This definition, while common among philosophers, does not agree with the everyday intuition about what free will is. Usually, free will is associated with a lack of predictability of an agent's actions. Consider, for example, Stanislaw Lem's novel "The Mask". In this novel, the protagonist is a sophisticated AI programmed to seek and kill a particular person. But despite the program, the AI develops an affection for her target (who is on the run), and when asked whether she will kill the target once she catches up with him, answers "I don't know". The monk who asked the question then declares that the AI is fundamentally no different from a human ("You are truly my sister."). The reason the AI is not capable of answering the question is that some of her impulses to action are generated unconsciously: while the AI does not consciously want to kill the target, she cannot fully control these unconscious impulses. Because of this, the monk concludes that the AI possesses free will. From this viewpoint, to have free will an agent must have both conscious and unconscious goals. This seems to be different from the above "philosophical" definition. Indeed, imagine a completely rational agent all of whose motivations are conscious. Should one regard it as free or not? In this connection note that while the distinction between conscious and unconscious is very intuitive, its precise meaning is not at all obvious. Dennett addresses this second understanding of free will as well, but not very explicitly. I will comment on this below.
Let us now return to what distinguishes rational agents from all other agents. As a preliminary, we need to understand what an agent is. According to Dennett, it is best to approach this question from an evolutionary standpoint. That is, our best examples of agents are living organisms, and all definitions should be tested against these examples. At the most basic level, an agent is a system which is capable of maintaining its internal state despite a changing environment. Bacteria, for example, are agents: many of their functions are a way to achieve homeostasis. In a sense, one can say that a bacterium has the goal of maintaining its own homeostasis. It has other goals too, like reproduction, but this is less important. A robot, say, need not reproduce in order to be regarded as an agent; it only needs to be able to act to preserve its own integrity and functional ability. More complicated agents are capable of learning and even of planning future actions. But this does not yet make them rational.
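A minimal "homeostat" makes this bare-bones notion of agency concrete (my sketch, with invented numbers): all the agent needs is a feedback loop that counteracts environmental disturbances.

```python
import random

# A minimal agent in the sense above: a system that acts so as to hold an
# internal variable near a set point despite a changing environment.
TARGET = 37.0   # internal set point (think body temperature)
GAIN = 0.5      # how aggressively deviations are corrected

state = TARGET
for step in range(20):
    state += random.uniform(-2.0, 2.0)   # environmental disturbance
    action = GAIN * (TARGET - state)     # corrective action
    state += action
    print(f"step {step:2d}: state={state:5.2f} after action {action:+.2f}")
```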
Consider now an agent which is not only capable of learning and planning but is also a social animal and can communicate with other similar animals. We will also assume that these animals are not clones of each other. They live in groups and benefit from cooperation, but since they are not genetically identical and have varying learning experiences, it is important for them to be able to model each other's behavior based on the available evidence. Since these animals are capable of complicated planning, modeling an agent's behavior is virtually impossible without modeling its thought process. But if an agent is capable of modeling other similar agents, including aspects of their thought process, the same machinery can be used to model its own behavior and thought process. Such an agent is capable of creating complex scenarios of future events which incorporate its own responses. These pictures of alternate futures are part of our planning machinery. Having created them, we choose the course of action which maximizes our "utility function". From Dennett's viewpoint, the ability to choose a course of action based on such a deliberation is free will.
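Schematically (my rendering, not Dennett's own formalism; all names here are invented for illustration), the deliberation just described looks like this:

```python
from typing import Callable, Dict, List

def deliberate(actions: List[str],
               simulate: Callable[[str], Dict],
               utility: Callable[[Dict], float]) -> str:
    """Pick the action whose simulated future has the highest utility."""
    scenarios = {a: simulate(a) for a in actions}   # pictures of alternate futures
    return max(scenarios, key=lambda a: utility(scenarios[a]))

# Toy social world: share food with a group member or hoard it.
def simulate(action: str) -> Dict:
    if action == "share":
        return {"food": 1, "reputation": 2}    # less food, but allies remember
    return {"food": 3, "reputation": -1}       # more food, but trust erodes

utility = lambda outcome: outcome["food"] + 2 * outcome["reputation"]
print(deliberate(["share", "hoard"], simulate, utility))   # -> "share"
```

The dictionary of simulated outcomes plays the role of the "pictures of alternate futures" discussed above.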
These models of the future are also what our mind interprets as "free will" in the colloquial sense, i.e. as unpredictability. This is really a misnomer, at least according to Dennett, because from his viewpoint a completely rational agent is predictable, yet has free will in the "philosophical" sense. We now come to the definition of a rational agent. As far as I can tell, Dennett's rational agent is an agent of the sort considered in the previous paragraph whose decision process is at least partially conscious. But what is a conscious process? Again, as far as I can tell, a conscious process is a brain process which can be communicated to another agent. The ability to communicate with others is thus regarded as an essential part of what makes us rational. From the evolutionary point of view, the ability to verbalize our reasons probably arose as a way to influence other individuals. Thus language and consciousness are closely related, and in fact the former is regarded as a prerequisite for the latter. A completely rational agent is an agent who can communicate its whole decision process to another agent. We, of course, are only partially rational, and the unconscious part of the decision process, coupled with conscious modeling of the future, is what creates the illusion of uncaused actions.
This proposal about the nature of free will has some practical implications. The most important one is probably in the area of AI. An AI which could be called a rational agent in the above sense has to have the following properties (a minimal interface is sketched below):
(1) It should be able to communicate with other rational agents (us).
(2) It should be able to plan its actions in a way which depends on environmental factors, including information obtained from other rational agents.
(3) It should be able to model the behavior and thought processes of other rational agents, taking into account the information obtained during communication.
(4) It should be able to create detailed scenarios of the future, including its own reactions to future events, and choose a course of action based on the results of this modeling and on its utility function.
Seems like a tall order, but not impossible.
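For concreteness, here is one possible way (entirely my sketch, not anything from Dennett) to transcribe these four requirements into an interface that a candidate implementation would have to fill in:

```python
from abc import ABC, abstractmethod
from typing import Any, Dict, List

class RationalAgent(ABC):
    """The four capabilities listed above, stated as an abstract interface."""

    @abstractmethod
    def communicate(self, message: str, other: "RationalAgent") -> str:
        """(1) Exchange information with another rational agent."""

    @abstractmethod
    def plan(self, environment: Dict[str, Any], received_info: List[str]) -> List[str]:
        """(2) Produce actions conditioned on the environment and on
        information obtained from other rational agents."""

    @abstractmethod
    def model_other(self, other: "RationalAgent", evidence: List[str]) -> Dict[str, Any]:
        """(3) Build a predictive model of another agent's behavior and
        thought process from communicated evidence."""

    @abstractmethod
    def choose_action(self, scenarios: List[Dict[str, Any]]) -> str:
        """(4) Simulate detailed futures, including the agent's own
        reactions, and pick the one maximizing the utility function."""
```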
(no subject)
Date: 2015-02-21 04:20 pm (UTC)
Not sure about the author's judgment regarding insects: how does he know? Is he guessing?
OTOH, I get the feeling that the so-called "rational" is taken to be strictly based on set theory with AC. It would be irrational otherwise.
(no subject)
Date: 2015-03-19 05:19 pm (UTC)
To state it briefly: intelligence (as we understand it in the everyday sense) is the result of the integrated work of the visual cortex and the verbal one.
(no subject)
Date: 2015-02-21 08:47 pm (UTC)
Unfortunately, the definition is circular: the ability to express one's reasoning defines a rational agent, while rationality is defined as the ability to understand communicated reasoning.
(no subject)
Date: 2015-02-21 07:28 pm (UTC)
2. I believe that we still do not understand the nature of language and communication. We do not really know what it means to "communicate" something to someone (except in isolated subject domains where the scope of possible ideas is finite). We still do not understand what it means to "think" or to have "intelligence" or to "understand one's actions". There are some formal logics that try to describe this, but we are very far from even beginning to grasp this subject. Until this happens, there is no chance we can create something that we will ourselves believe to be a "real AI".
3. We are not likely to believe that we have a "full AI" until it can behave and talk substantially like a human, including feelings, unconscious desires, psychological instability, and so on. For this, it appears that we need to fully understand the way a human brain works. To understand itself, a brain has to contain a fully explicit, "conscious" model of itself, which seems to be contradictory because a fully explicit model of some object would need to contain more information than the object being modeled. So it seems that the brain cannot fully understand its own workings. Therefore, we will never actually be able to design and create a "real AI".
4. If a "real AI" can be created only in an "emergent" sense, then the situation will be like this: we first create some kind of machine that is more primitive than a "full AI". This machine evolves in some unknown way, not explicitly programmed or designed by us, so that eventually the "full AI" capability "emerges" in the machine. Well, this machine is called a baby. We can already make those. This is presumably not what is meant when one hears about projects to "create an AI".
(no subject)
Date: 2015-02-21 09:14 pm (UTC)
2. Communication in general is simply transmission of information. Pretty straightforward. There are indeed different "levels" of communication, and Dennett discusses this at length. As for what "thinking" means, Dennett does not claim to understand it either. Neither does he say that creating an AI is easy. But this is not directly relevant to the question of free will.
3. A single brain cannot contain a model of itself. It also cannot contain the complete text of "War and Peace" or a complete proof of the four-color theorem. Luckily we have such things as writing and computers to help us organize and store large quantities of information.
4. Dennett does not say that an AI has to "emerge" in this sense. Emergence is used in a different sense in his work (the same sense as in the physics literature). It might turn out that an AI has to evolve and learn before it really turns into a full AI. Then we would not have full control over its final capabilities and would not fully understand its workings either. Designing such an AI would be a tremendous accomplishment nevertheless. (We did not design babies, Mother Nature did, so we cannot take credit there. Similarly, it would be a great accomplishment to design a self-replicating mechanism, even though we already have cells.)
(no subject)
Date: 2015-02-22 06:55 am (UTC)
I don't think he ever described an experiment that could in principle refute the hypothesis of free will, or the hypothesis that we act "with intention", or "rationally", or whatever. Also, in the Salon article he is just chatting away. Without a clear, decisive experiment, all this talk about "free will" stays firmly in the word-juggling domain.
2. Transmission of information, yes. If we could reduce our thought processes to information (i.e. to a formal logical description of some kind), we would be in business. But we don't know how to represent our thoughts fully in some formal logical system. So we don't know what it really means to have an "agent that can communicate its thought processes". That's all I'm saying.
I agree with Dennett's description of different levels of communication. Of course, even if we had a neuron-by-neuron description of the brain, it would not help us here.
3. I agree with you here. We could in principle imagine that all the required information is stored in some external medium. However, I believe that the amount of information necessary for representing a human brain in a computer is going to remain beyond reach forever.
4. We already have some self-evolving simulations today ("artificial life"), as well as "machine learning". Obviously this is not enough of an achievement, though.
My analogy with babies was actually intended to be more precise. Yes, we do not "design" babies in the same way as we design a computer. However, the amount of understanding that goes into selecting a human partner for procreation is staggering and certainly far more than the amount of information required for designing a computer. Also, the amount of information that would have to "emerge" in the transition to the "full AI" is vastly more than the amount of information that we might be able to "design". So, I think that in a very precise sense, we would be "designing" a full AI in the same sense as we are "designing" babies today. In other words, the difference between the AI that we would be able to design and the "full AI" is going to be so vast that it would not be appropriate to say that we actually designed it.
(no subject)
Date: 2015-03-03 02:19 am (UTC)
If we take this definition at face value, then obviously http://en.wikipedia.org/wiki/SHRDLU is a rational agent because its behavior (in a virtual world) can be "affected by communicating thoughts to it". SHRDLU can make rational decisions based on information (e.g. which block to lift or not to lift), and it can certainly "explain its reasoning" (e.g. it answers that it can't lift a certain block because of obstructions). So SHRDLU is a "rational agent" and has "free will" according to this "empirical" definition.
I'm certain that none of us (including Dennett) would agree with this conclusion, though.
Generally, there is a pattern among philosophers when they talk about AI. First, they give a set of "definitions" or "criteria" that a computer should fulfill in order to be a "thinking agent", or to have "free will", or to have "real intelligence". Then, a computer program is presented that substantially fulfills these requirements. At this point philosophers promptly explain why this is not good enough and why the criteria for being "really intelligent" need to be changed.
(no subject)
Date: 2015-04-14 12:57 am (UTC)
Another reason it is not a rational agent is that it cannot model other rational agents which communicate with it (that is, humans).
But this is a good example because it shows that Dennett's notion of a rational agent depends on the context: it depends on the complexity of the environment as well as on the complexity of the other agents around it. One can imagine a society of simple robots which operate in a simple SHRDLU-like environment, have hard-wired goals, an ability to communicate with other robots (transmitting information to them and interpreting information from them), and some ability to learn (which makes these robots non-identical). Imagine also that their learning ability is developed enough to recognize that different robots have somewhat different patterns of behavior and to take this into account, for example by ignoring information from robots which tend to "lie" (see the sketch below). If the environment is simple, these robots would not be rational with respect to us (i.e. they would not be able to model us), but I would say they are rational with respect to each other. Similarly, to a society of highly intelligent beings, humans could appear no more rational than these robots.
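To make the "ignoring liars" mechanism concrete, here is a minimal sketch (my invention, with made-up numbers): each robot keeps a trust score per peer and stops believing peers whose claims repeatedly fail to check out.

```python
from collections import defaultdict

class Robot:
    def __init__(self, name: str):
        self.name = name
        self.trust = defaultdict(lambda: 0.5)    # prior trust in each peer

    def receive(self, sender: str, claim: bool, observed_truth: bool) -> None:
        # Reinforce or erode trust depending on whether the claim checked out.
        delta = 0.1 if claim == observed_truth else -0.2
        self.trust[sender] = min(1.0, max(0.0, self.trust[sender] + delta))

    def believes(self, sender: str) -> bool:
        return self.trust[sender] > 0.3          # habitual liars fall below this

r = Robot("r1")
for _ in range(4):                               # peer "r2" lies repeatedly
    r.receive("r2", claim=True, observed_truth=False)
print(r.believes("r2"))                          # -> False
```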
Sorry for not answering earlier.
(no subject)
Date: 2015-04-14 07:24 pm (UTC)
However, this will not convince anyone. Human psychology strongly resists the idea that a machine can "really think". Philosophers will never accept the idea that any computer "really thinks", no matter what programs are presented to them.
I have a better explanation for this, and a better criterion for what it means to have a "really thinking machine" or a "really strong AI". The explanation for the resistance of philosophers is, briefly, that humans evolved to talk to other humans and to assume that other humans think in a similar way. There is no way to control humans as effectively and mechanically as we control a machine. However, real life forces us to talk to other humans and to attempt to influence their behavior. The way we do this is based on an intuitive grasp (modelling) of certain aspects of other people's behavior and thinking. This intuitive mechanism is activated emotionally whenever we see some creature that resembles a human or a sufficiently highly developed animal, whenever we see something that has a head, a face, etc., and that behaves in a somewhat complicated way. Let's call this mechanism the "recognition of an empirical human". Through this mechanism, people subconsciously identify pet animals and babies also as "empirical humans".

So, human psychology will never accept that empirical humans are machines, or that machines are empirical humans. Machines can be controlled effectively and almost entirely mechanically; we know that it is not necessary to think about them in the same way as we think about humans. We know that an effective way of controlling a machine is to press buttons, etc., not to attempt to interact with it using our emotions and social intuitions. So psychologically we cannot accept a machine as an empirical human.
However, if we succeed in creating a machine whose behavior is so complex that we won't be able to control it except by engaging in human-like interactions, we will then instantly believe that it is an empirical human, and we will have to grant it human rights and responsibilities.
(no subject)
Date: 2015-02-22 05:31 pm (UTC)
Clumsy, of course, but it is what it is (the credit, I suppose, goes to Georgy "Synergetics" Malinetsky and co.).
(no subject)
Date: 2015-02-28 08:12 am (UTC)
- and right here is where the error lies. From the fact that something seems "intuitively obvious" to someone, it does not follow that it is true. Let alone setting out to investigate a question by starting from a premise that directly and unambiguously answers that very question (to say nothing of doing so "without having defined, and then cleaned up, the concepts")...
Dennett, meanwhile, is doing the compatibilists' favorite thing: he redefines (or rather, simply defines) the notion of "free will" so that, instead of the intuitive and (on close inspection) oxymoronic "freedom to act differently in exactly the same situation [(because "I so wish")]", one gets something that carries the same name but does not contradict determinism (in his version, of course, the one with "there is randomness at the quantum level").