A Key Factor That Is Missing from Current Theories

That key factor which is missing from current theories is agency. By agency, I mean the capacity to make real choices by exercising our will, ethical choices for which we are responsible. I will show a connection between agency and meaning. And since I have already shown that to translate we must consider meaning, I will then have shown that there is a connection between agency and translation. Any ‘choice’ that is a rigid and unavoidable consequence of the circumstances is not a real choice that could have gone either way and is thus not an example of agency. A computer has no real choice in what it will do next. Its next action is an unavoidable consequence of the machine language it is executing and the values of data presented to it. I am proposing that any approach to meaning that discounts agency will amount to no more than the mechanical manipulation of symbols such as words, that is, moving words around and linking them together in various ways instead of understanding them. Computers can already manipulate symbols. In fact, that is what they mostly do. But manipulating symbols does not give them agency and it will not let them handle language like humans. Symbol manipulation works only within a specific domain, and any attempt to move beyond a domain through symbol manipulation is doomed, for manipulation of symbols involves no true surprises, only the strict application of rules. General vocabulary, as we have seen, involves true surprises that could not have been predicted.
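To make concrete what is meant here by the mechanical manipulation of symbols, consider a deliberately trivial sketch (the lexicon, the language pair, and the function name below are invented purely for illustration, not drawn from any real translation system): word-for-word substitution from a fixed table. Within its tiny domain it appears to translate; the moment the input steps outside the domain, it can only fail, because no rule anticipated the surprise.

```python
# A toy illustration of pure symbol manipulation: word-for-word
# substitution from a fixed lookup table. No understanding is involved;
# the program only moves symbols around according to a rigid rule.
LEXICON = {"the": "le", "cat": "chat", "sleeps": "dort"}

def substitute(sentence):
    """Replace each word by its table entry; unknown words expose the limit."""
    return " ".join(LEXICON.get(word, f"<{word}?>") for word in sentence.split())

print(substitute("the cat sleeps"))   # within the domain: "le chat dort"
print(substitute("the cat dreams"))   # outside the domain: "le chat <dreams?>"
```

However large the table grows, the program's behavior remains a strict application of rules; that is precisely the contrast with general vocabulary drawn above.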

The claim that agency must be included in an approach to meaning is perhaps unexpected. I will draw on five different sources to support this claim: (1) some work by Terry Warner (BYU Dept. of Philosophy); (2) some work by John Robertson (BYU Dept. of Linguistics); (3) some thoughts on physics by Jack Cohen, a biologist, and Ian Stewart (a mathematician); (4) some work on neural science by Antonio Damasio; and (5) an analysis of Shakespeare’s Othello by Nancy Christiansen (BYU Dept. of English). So far as I know, these various parties have never collaborated, yet they are presenting pieces of what is beginning to look like a coherent framework of support for the importance of agency in fully explaining human language.

Terry Warner has been working on the notions of agency and self-deception for many years. I remember studying his writings on the subject as far back as 1982. But at the time, I did not see the connection with translation. Of course, there is a connection between translation and language: you need to know at least two languages in order to translate. So the key question is whether language depends on agency. If it does, then translation depends on agency too, at least sensitive translation of general language texts. Then, a few years ago, the BYU Dept. of Philosophy organized a seminar on the philosopher Levinas. This brought Terry and me together in a new way and eventually resulted in my seeing a connection between agency and language and in the writing of a joint paper, which has now been expanded into a book (Melby and Warner, 1995 [in press; see references]). I will not talk here about our general collaborative work, but only about the use we have made of Levinas. Levinas talks about otherness. Someone who is ‘other’ is outside of you and not under your control. A physical object can, of course, be outside of you yet totally under your control. A physical object can be under your control intellectually, if in no other way, in that you can include a representation of it in a system of ideas that enables you to label it exhaustively and predict its behavior.

We totalize objects in the physical world when we bring them totally under our control. Levinas points out that when we attempt to totalize other humans, we are treating them like objects rather than like humans. But we speak and listen only on the presumption that we are communicating with beings who are not objects but beings with an inner life of their own, just like ours, whose background and individuality we can take into account and who can take into account our background and individuality. That kind of language, not idealized as if it were a domain language but taken in its dynamic reality, has ethics as at least part of its foundation. Note that we have not said that ethics is based on language; we have said that language is based on ethics, making ethics logically prior to language. We present this unusual notion in more detail in chapter 4 of our book, and Levinas develops it at length in some of his writing. In order to make ethical choices we must have agency, that is, we must be agents. Unless we regard others as agents, just like us, who in turn regard us as agents, many key notions that are a basis for general vocabulary become meaningless. Without this interacting agency, there is no responsibility, no empathy or indifference, no blame, and no gratitude. So much is lost from language that what is left can be described as a technical domain and handled by a computer. Agency is not a layer on language; dynamic general language is a layer on agency. Without agency, we are reduced to the status of machines and there is no dynamic general language. Without dynamic general language, we would translate like computers and there would be no truly human translation as we now know it. Thus lack of agency is one factor that keeps computers from translating like people.

As I re-read John Robertson’s Barker Lecture, I noticed that on page 15 he points out that if language were just a machine that tells whether or not a sentence is grammatical, then language would not allow personal relations with God and other humans. He notes that there was a war in heaven a long time ago. This war is mentioned in the New Testament (Revelation 12:7). According to other ancient accounts in the Pearl of Great Price quoted by Robertson, the main issue of the war in heaven was whether or not people would have agency. Happily, the pro-agency team won out. Our agency is a prized possession. Neal Maxwell, at the October 1995 General Conference of our university’s sponsoring institution, speaking of will, an essential element of agency as I have defined it, said, “Our will is our only real possession.” The anti-agency team, led by Lucifer, would have totalized all humans and there would have been no will, no agency, and thus no human language as we know it. We would be like computers sending meaningless data back and forth.

Robertson is exploring an approach to language which, unlike mainstream linguistics, is compatible with agency. He has studied the works of C.S. Peirce intensively and finds in them such an approach. Initially, it would appear that Robertson’s approach to language is compatible with the Warner approach, in that both include agency as essential to fully human language.

We now turn from philosophy to physics. Bear with me while I attempt to make a connection between them. The issue I am concerned with is whether our current understanding of physics is compatible with agency. As a youth, I had the impression that physics viewed the world as entirely deterministic. In other words, what will happen next is supposedly determined exactly and precisely by the current state of the physical universe. In a deterministic view of physics, there is no room for human agency because we are part of the deterministic system. If there is no agency, then it should be possible to program a computer to do anything a human can. So it would be nice if physics allowed for agency.
The view of the brain as a deterministic machine is still held by very intelligent people. For example, Patricia Churchland, Professor of Philosophy at the University of California, San Diego, recently (October 12 and 13, 1995) gave a series of invited lectures on our BYU campus. Two of her titles are revealing: “Understanding the Brain as a Neural Machine” and “Am I Responsible If My Brain Causes My Decisions?” From my own attendance at one of her lectures and from reports of a colleague, it is clear that Churchland holds the view that we have no real agency, since our future decisions are completely determined by the current state of the machine we call a brain and by the input our brain receives from the outside. However, as we will shortly see, the view of the universe as purely deterministic is out of date in physics.

In their book, The Collapse of Chaos [ references ], Cohen and Stewart take the reader on a tour of modern reductionist science. In the reductionist approach, as already mentioned in the report on John Robertson’s Barker Lecture, the complexity of the world around us is analyzed in terms of simpler constituents that are linked together by relatively simple laws: the laws of nature. Typical examples of successful reductionism are the equations for electromagnetic phenomena already referred to in the summary of the Robertson Barker Lecture and the equations predicting the motions of the planets using Newton’s laws. As Robertson has pointed out, an unwise use of reductionism has been damaging in linguistics, but I had assumed that it had been uniformly successful in physics. However, in the past decade or so, some of the implications of chaos theory have begun to sink in. Now even classical physics is not seen as entirely predictable, even if it is exact in analyzing past events and predicting many future events. There are systems such that even the tiniest differences in initial conditions can lead to large differences at some future time.
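This sensitivity to initial conditions can be seen in a few lines of code. As an illustration (the logistic map is a standard textbook example of chaos, not an example taken from Cohen and Stewart's book specifically), two starting values that differ by one part in a billion quickly cease to resemble each other:

```python
# Sensitive dependence on initial conditions via the logistic map
# x -> r * x * (1 - x) with r = 4.0, a classic chaotic system.
def logistic_orbit(x, r=4.0, steps=50):
    """Iterate the logistic map `steps` times and return the final value."""
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

a = logistic_orbit(0.300000000)  # one initial condition
b = logistic_orbit(0.300000001)  # differs by one part in a billion
print(abs(a - b))  # the tiny initial gap has grown enormously
```

The rule being iterated is perfectly deterministic and exact, yet after a modest number of steps the two trajectories have diverged so far that the initial measurement, however careful, no longer predicts the outcome.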

Cohen and Stewart, who challenge the assumption that reductionism is sufficient even in physics, ask an intriguing question: if complexity is explained by reductionism, then what explains the simplicity we see around us? As one example, they consider crystals. The structure of a crystal is not readily explained in terms of the detailed vibrations of individual atoms. However, the structure of a crystal is probably influenced by the tendency to minimize energy, and this tendency is contextual rather than reductionistic. They go so far as to state, “We are surrounded by evidence that complicated systems possess features that can’t be traced back [solely] to individual components.” (pp. 426-427) In other words, reductionism is insufficient to explain the physical universe. William E. Evenson of the BYU Physics Department puts it this way (personal communication): “You have to make sure that the individual components are self-consistently adjusted for the context.” You can’t blindly build everything up from individual components without some notion of the big picture. That is quite an admission from scientists and mathematicians. Cohen and Stewart do not claim that we must believe in God. Indeed, the claim that belief in God is a necessary consequence of science would be incompatible with agency, since there would be no room for faith. [ 2 ] But they point out that modern physics is not incompatible with a belief in God. They even refer to an interpretation of physics that leaves room for human agency. Cohen and Stewart, along with many others, discount some of the speculations of Roger Penrose (1989 and 1994; see [ references ]), a mathematician who thinks that consciousness comes from quantum effects in a certain part of the brain. But they agree that the question of consciousness is an important one.

The interpretation they support comes from Freeman Dyson (Cohen and Stewart, 1994, p. 272) and does not depend on the details of the brain. In quantum mechanics, it is well known that you cannot measure the exact position and speed of a subatomic particle without influencing the position and speed of the particle by the process of measuring. This introduces a truly random element into the physical world, which means that the future is not absolutely determined by the past.[ 3 ] Dyson says that quantum mechanics describes what a system might do in the future, while classical mechanics describes what the system ended up doing in the past. He suggests that our consciousness may be at the moving boundary between future and past, that is, the present. This interpretation of physics says that the future cannot be computed exactly even though the past can be analyzed exactly, leaving open the possibility of free will and thus agency through choices in future action. Hopefully, I have now made a convincing connection between physics and the philosophy of agency.
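For readers who want the standard statement of this measurement limit, it is Heisenberg's uncertainty relation (a textbook formula, not something derived by Dyson or by Cohen and Stewart), which bounds how precisely a particle's position x and momentum p can be known at the same time:

```latex
\Delta x \, \Delta p \;\ge\; \frac{\hbar}{2}
```

Because no measurement can drive both uncertainties to zero at once, the present state of a system can never be pinned down exactly, and so its future cannot be computed exactly from present data.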

Dyson’s explanation is reminiscent of the way word meanings shift. They are unpredictable in advance, as in the treacle example, but once they are in the past they can always be analyzed, and a motivation can be established in retrospect. Cohen, Stewart, and Dyson have opened up to me a new view of physics. This new view is compatible both with modern physics and with a linguistics based on agency rather than on deterministic generation of sentences.

Until recently, I assumed that the highest levels of translation would require a personal understanding of emotions, but I did not see any connection between emotions and other mental functions needed for human-like translation. From brain science comes surprising support for a connection between emotion and human reasoning. Human reasoning is an essential aspect of agency. What good does it do to have the ability to make choices if one cannot use even common sense reasoning in making decisions? In light of recent studies, the need for emotions is not a separate requirement for human-like translation: agency and human reasoning ability imply feeling emotions, because without emotions, human reasoning is impaired. Antonio Damasio, a well-respected neurologist, has published an intriguing book, Descartes’ Error [ references ], which challenges the claim made by Descartes that reason and emotion should be kept separate. Damasio draws on case studies of unfortunate people who are the victims of damage to a certain area of the brain, damage that robs them of the ability to feel emotions. Damasio shows convincingly that the inability to feel emotions hinders their ability to reason normally and make common sense decisions. For one thing, they become insensitive to punishment. In a way they may lose some part of their agency, since they can no longer feel emotions. Considerably more work is needed, probably in the form of masters and doctoral theses, before firm conclusions can be drawn. But in these cases of brain damage, along with a loss of common sense, I predict there will be a detectable loss of the ability to produce sensitive translations of certain kinds of texts, unless the patient’s memory of having felt emotions is sufficient to maintain a full capability in language. This discussion should be continued as more evidence accumulates.

Finally, we turn back to Shakespeare and find that he may have understood the connection between language and agency all along. Nancy Christiansen (see [ references ]) points out that Othello is trapped because he can only see one interpretation of events at a time. We could say that he loses some of his agency by getting trapped in a domain. Iago, on the other hand, is acutely aware that multiple interpretations of the same facts are possible, but denies some part of his agency by denying any connection between ethics and choices. Shakespeare, meanwhile, sits back and sees all sides. He recognizes agency (which is ethics in action) as the basis of language. I wonder what would have happened if Shakespeare had been chosen as the linguist of his day. Perhaps everyone would be convinced that agency is needed for human-like translation. This effort to balance agency and determinism would then be much ado about nothing. Shakespeare saw the wave of determinism that has engulfed our generation, and saw beyond it. Great literature was never taken in. Further dialogue with Christiansen on agency and language is in order.

These five sources fit together in that they all are compatible with the claim that agency is essential to the richness of normal human language, as opposed to machine-like domain language. Warner speaks of both language and agency being based on ethics. Robertson claims that agency is essential to the development of relationships. Cohen, Stewart, and Dyson show that agency is compatible with modern physics. Damasio shows that fully human reason, which is essential to agency, is tied to emotions. And Christiansen shows us that Shakespeare understood the connection between ethics, language, and agency long before I started thinking about it.

Our concepts are not based on some absolute self-categorization of the physical universe. They are based in part on the ethical dimension of our relationships with others. Our agency, which includes both emotion and reason and the ability to choose how we will respond to demands placed on us by others, is the basis for human language as opposed to machine-like language.

Finally, we can answer the question of this paper. A computer cannot translate more like a person because it lacks, among other things, agency. It won’t suffice to store massive amounts of information. Without agency, information is meaningless. So a computer that is to handle language like a human must first be given agency. But we should be careful, because if we give agency to a computer it may be hard to get it back and the computer, even if it chooses to learn a second language, may exercise its agency and refuse to translate for us. Douglas Robinson (1992; see [ references ]) puts it well. He asks whether a machine translation system that can equal the work of a human might not “wake up some morning feeling more like watching a Charlie Chaplin movie than translating a weather report or a business letter.”