Artificial Intelligence
Thompson considers the question, “Can a computer have intelligence, as we understand it in human beings?” In response, he examines a variety of concerns involving the human intellect, which exhibits subtle features that challenge an exclusively algorithmic analysis. Thompson proposes that creativity, language, learning ability, and facial recognition could all point to the necessity of a “programmer” outside of an exclusively mechanistic system.
TRANSCRIPT: Artificial Intelligence. Radhadesh, Belgium - 1988 / (203)
Introduction by Yadunandan das: You are listening to Radio Krishna on 87.6 MHz. In the next hour we will hear Sadaputa das, one of our scientists from America. In this recording he’s visiting a temple in Belgium called Radhadesh, and we enter in the middle of a discussion concerning artificial intelligence. Is it possible to create artificial intelligence with, for instance, a computer? Welcome to Sadaputa das.
[1:31]
RLT: Of course, on one level, you could say this is a matter of how you want to define words. And I don't think we want to restrict the discussion to that level. If you would like to say that by definition any device that can perform these operations is intelligent, then computers can be intelligent. But what we're really interested in is whether or not a computer can have intelligence as we understand it in human beings. So there are a number of observations to make here. First of all, a computer has to have a programmer. Now, once a computer is programmed it can execute that program, but the program thus far is produced by a human being and entered into the computer in a particular form. Now, as far as anyone knows, the human brain does not have a programmer - that is, as far as current scientific knowledge is concerned. Actually, I would like to make a suggestion: that the human brain does have a programmer. I would like to suggest that we don't really wish to object to this parallel which is being drawn between the computer and the human brain. Rather, what we would like to do is say that this parallel should be drawn out further and made more comprehensive. The brain is indeed like a computer. And just as the computer requires a programmer, the brain also requires a programmer. But what is the programmer of the brain?
So, before getting to that point, let me say a few things about the different examples of intelligence which are given here. In every case you'll see that the computer is solving a particular problem. Now, before the computer solved that problem, some human being worked out the solution in an abstract form within his own mind. Now, you might say that in certain cases the computer even learns by itself - the computer even comes up with its own heuristics - but in such cases the programmer, in advance, figured out an algorithm whereby the computer can come up with its own heuristics. You see, you can solve problems at various levels of generality, and mathematicians do this. Just to give you an example, suppose I presented you with a computer which could solve problems in geometry. You might even ask it to prove that the sum of the squares of the other two sides of a right triangle equals the square of the hypotenuse. And sure enough, after processing that request for a while, it would come out with a rigorous mathematical proof, starting from the axioms of geometry. So you might say, "Well, this computer is exhibiting intelligence if it does that." Now, I think one could build such a computer. In fact, I even think I could do it myself.
But what would be involved in doing it? What you find is that the human programmer figures out what logical procedures are required to solve a given class of mathematical problems, such as the class of problems in classical Euclidean geometry. That would be a very difficult area in which to do it, but theorem-proving programs have been written for more restricted classes of mathematical problems. So, the point is that, in advance, the human programmer, in his own mind, understood how any such problem could be systematically broken down and solved by a step-by-step procedure involving these logical functions here, the Boolean operations. So, the computer can do that after that program has been entered into it. The real work there, I would submit, has been done by the human mathematician who worked out this systematic scheme, and the computer merely puts it into execution.
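[As a rough illustration of what such a step-by-step scheme looks like once a class of problems has been reduced to Boolean operations, here is a minimal sketch in Python: a brute-force checker for propositional tautologies. It is nowhere near a geometry theorem prover; the formula, the function names and the whole toy setting are illustrative assumptions, not anything from the talk.]

```python
from itertools import product

def is_tautology(formula, num_vars):
    """Brute-force truth-table check: once a problem has been reduced to
    Boolean operations, a machine can grind through every case mechanically."""
    return all(formula(*values) for values in product([False, True], repeat=num_vars))

# Modus ponens as a formula: ((p -> q) and p) -> q, with "->" spelled out
# as "not ... or ...".
modus_ponens = lambda p, q: (not ((not p or q) and p)) or q
print(is_tautology(modus_ponens, 2))  # True: it holds in every case
```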
Now, if you would like to define that as intelligence, that's okay. In that case, the computer has intelligence and there's no objection to saying that. But what about the function of creating that algorithm - of arriving at the algorithm? Can a computer do that? Well, as a matter of fact, you could even conceive of a computer that can invent algorithms. But how will the computer invent algorithms? It will only do so if some human being invents a meta-algorithm that uses systematic logical steps to generate algorithms, which you could conceivably do. Practically, there's no limit that we know of to what you can do in mathematics, apart from certain limitations established, say, by Gödel's theorem and results like that. So, it is conceivable that a human being who had very great insight in mathematics could devise a general logical scheme that could solve a whole class of mathematical problems that people, in general, find very difficult to solve. And if he could do that, then it follows inevitably that the computer could be programmed to execute that algorithm, and in that case the computer could solve those problems. So, the interesting thing is that human beings can somehow or other come up with these ideas. Computers can only execute them once human beings have come up with them.
[7:40]
So, the question is, how does one come up with these ideas? Now, one could even propose in principle that we may come up with an ultimate algorithm which would account for all human creativity. In other words, if you were to put this algorithm in a computer, the computer would act creatively, fully as a human being does. So, one could consider whether or not that is possible. At the present time such ideas are exceedingly utopian. We have no real basis, in terms of the experience we have had with computers thus far, for saying we can actually do something like that. Computers can solve certain classes of problems that mathematicians understand in advance very well, but as for the idea of a computer that can exhibit general creativity - well, no one has thus far exhibited the creativity needed to come up with such an algorithm. So, whether that is possible or not is a matter of sheer conjecture. It's not, therefore, really relevant to a discussion of artificial intelligence. Srila Prabhupada, of course, many times gave the post-dated blank check argument - namely, that it is not valid to say "well, in the future we will do a particular thing." He was interested only in what people can actually do at the present time.
Now, take the playing of chess, for example. There are computers now that can beat even master chess players, but they cannot yet beat the best human chess players. But those computers operate according to strategies that people have worked out, and the computer simply executes the strategy much faster than a human being can. So that's how these things operate: somebody has understood the chess strategy in advance and programmed the computer with that strategy. So, that's one point to make concerning computers and intelligence.
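[As a sketch of what "a strategy worked out in advance" amounts to in code, here is plain minimax search, the textbook scheme behind chess programs of that era. The state interface assumed here (is_over, score, legal_moves, apply) is hypothetical, standing in for a real engine; both the scoring function and the search rule are chosen by the programmer before the machine ever runs.]

```python
def minimax(state, depth, maximizing):
    """Textbook minimax: the machine only evaluates, faster than a person could,
    a search rule and a scoring function fixed in advance by the programmer.

    `state` is assumed to provide is_over(), score(), legal_moves() and
    apply(move) -- a hypothetical interface, not any particular chess library."""
    if depth == 0 or state.is_over():
        return state.score()  # static evaluation written by the programmer
    values = [minimax(state.apply(move), depth - 1, not maximizing)
              for move in state.legal_moves()]
    return max(values) if maximizing else min(values)
```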
Now, another very significant and practical point to make here is that all the things that computers can do well represent only one small facet of what human beings do in their normal social interactions. And that is an extremely, you might say, abstract, circumscribed and artificial facet, involving formal mathematics. But let's consider some of the other things people do. Someone comes into a room and you recognize him. You say, "Hello George. How are you doing?" You may have seen him from the side, let's say, or you may catch just a glimpse of him. Immediately you recognize who that is. Can a computer do that? Well, in principle, you might think that a computer can do it, but no one has ever succeeded in programming a computer to do that. It's an extremely difficult problem. At MIT we were once having a discussion with people who worked on the SHRDLU program, which was invented by a fellow named Terry Winograd. This was one of the great triumphs of artificial intelligence. This is a program in which you had a sort of artificial world of blocks, like children's toy blocks of different kinds. These were of different colors and they could be stacked on top of one another. So you could type into the computer a command in English saying, "Pick up the green block that's sitting on the blue block next to a stack of two red blocks and put it into a box." And then the computer would do that. Or else, if it couldn't figure it out, it would say, "This is ambiguous. There are two green blocks in that position. Which one do you want me to pick up?" and so forth. So, in this blocks world the computer could answer questions and behave in response to those questions as though it understood what it was being asked. And so we were asking, "Well, how does the computer recognize these blocks and their relationships?" This involves a whole analysis of edges and corners and how they relate and so forth. It's quite complex. So we asked, "Could you do this with curved surfaces?" They said, "We have no idea how to do it with curved surfaces." So, that's one point about computers.
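[For flavor, here is a toy sketch in the spirit of the blocks-world exchange described above: a hard-coded world model and a lookup that reports ambiguity. The block list, the query and the replies are invented for illustration; Winograd's actual SHRDLU involved full English parsing and a far richer world model.]

```python
# Invented toy world: each block has a color and the thing it is resting on.
blocks = [
    {"id": 1, "color": "green", "on": "blue block"},
    {"id": 2, "color": "green", "on": "table"},
    {"id": 3, "color": "blue",  "on": "table"},
]

def find_blocks(color, on):
    """Return every block matching the description, so ambiguity can be reported."""
    return [b for b in blocks if b["color"] == color and b["on"] == on]

matches = find_blocks("green", "blue block")
if len(matches) == 1:
    print("Picking up block", matches[0]["id"])
elif not matches:
    print("I don't see such a block.")
else:
    print("This is ambiguous: there are", len(matches), "green blocks in that position.")
```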
[12:05]
Actually, at the present time people are thinking that the approach to artificial intelligence that has been pursued at places like MIT for the last 20 years or so is not going to be effective, and they're now looking into what are called neural nets as a method of simulating what the human brain can do. These computers are what they call sequential processors: they perform one step at a time, very rapidly. It is beginning to appear that, although according to what is called Turing's thesis a computer can perform any conceivable computation, nonetheless in practice, in real time, it is not possible for a sequential computer to duplicate many functions of the brain. What to speak of intelligence, it would not be possible to have a computer that in real time can even exhibit the visual performance of, say, a dog or even a bumblebee. The pattern-recognition tasks are really not practically attainable, even using a Cray supercomputer or something like that, if you want to perform those tasks in real time. So they're looking at the neural nets and so forth. So this is an important point concerning the limitations of what people have been able to do with computers.
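[For reference, the basic unit of such a neural net is just a weighted sum passed through a squashing function. The sketch below, with hand-picked weights and no training at all, simulates a two-layer net of these units one step at a time, which is exactly the mismatch being described: a sequential machine can imitate the parallel architecture, but only by doing serially what the brain does all at once.]

```python
import math

def neuron(inputs, weights, bias):
    """One artificial 'neuron': weighted sum of inputs pushed through a
    sigmoid squashing function."""
    activation = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

def tiny_net(x):
    # Weights here are arbitrary hand-picked numbers for illustration;
    # a real net would learn them, and would run its units in parallel.
    hidden = [neuron(x, [2.0, -1.5], 0.1), neuron(x, [-1.0, 2.5], -0.3)]
    return neuron(hidden, [1.2, 1.2], -0.5)

print(tiny_net([0.8, 0.2]))  # a single number between 0 and 1
```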
There's another whole line of thought here, which is related to this. This has to do with knowledge representation. How can you represent within the memory of a computer the knowledge that a 5-year-old child has about his mother's kitchen? You can pose this question; it has been very seriously considered by researchers in artificial intelligence. You see, in all the functions that are described here, the data that the machine manipulates is of a fairly limited and circumscribed nature. For example, the position of the men on a chessboard is very easy to represent: you have 64 squares and so-and-so many different men. So it's a simple matter to represent that in a computer; but you may ask, "How do you represent the knowledge that a child has of his mother's kitchen?" Just consider how much knowledge there is. There's the knowledge of what an apple looks like, so you can tell that there's an apple from many different angles. If you see a slice of an apple, you can tell that that's a slice of an apple, as opposed to a spoon. So, they know what spoons are, knives, forks, what the stove is, what the soot on the pans is - so many different things. If you even pause and try to enumerate all that information and somehow order it in a logical way, you'll see it's a very formidable task just to define all that information. So, in order for computers to handle that, first of all, as things stand today, people have to be able to define that information and represent it somehow within the machine. Of course, there's the idea of programming the machine so it will learn all that information by itself, just like a child does, but no one has been able to come up with a program like that. So, in supposedly complex things that seem to take a lot of intelligence, like playing chess, the computer may be very good; but going into a kitchen and noticing that a cookie is something different from a spoon or a kettle on the stove is beyond the capacity of computers, as far as present-day progress is concerned. So these are a few points concerning computers, not at all touching on the whole topic of consciousness.
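[To make the knowledge-representation problem concrete, here is a hand-built fragment of "kitchen knowledge" written out as explicit facts, a minimal sketch in the frame style AI researchers used. Every slot and value below is something a person had to invent and type in; scaling this up to everything a five-year-old knows about a kitchen, and keeping it usable, is the formidable task being described.]

```python
# Invented fragment of a child's kitchen knowledge, as explicit nested facts.
kitchen_knowledge = {
    "apple": {
        "is_a": "fruit",
        "typical_colors": ["red", "green", "yellow"],
        "recognizable_when": ["whole", "sliced"],
        "found_in": ["fruit bowl", "refrigerator"],
    },
    "spoon": {
        "is_a": "utensil",
        "used_for": ["stirring", "eating soup"],
        "found_in": ["drawer", "sink"],
    },
    "stove": {
        "is_a": "appliance",
        "hazards": ["hot surface"],
        "holds": ["kettle", "pan with soot"],
    },
}

def what_is(name):
    """Answer only from what has been typed in; everything else is a blank."""
    entry = kitchen_knowledge.get(name)
    return f"{name} is a {entry['is_a']}" if entry else f"I have no knowledge of {name}"

print(what_is("apple"))   # apple is a fruit
print(what_is("cookie"))  # I have no knowledge of cookie
```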
Question: [unclear]
[17:41]
Answer: Well, first, your statements rely heavily on post-dated blank checks. In other words, the argument is: "We can't do it now because we don't understand how the brain works, and we haven't been able to figure out how to program computers to do these things, and computers don't have sufficient capacity. That's why we haven't been able to do it yet, but in the future we'll be able to do these things." This is a basic point of logic: that is not valid reasoning in any situation. So one has to stick to the actual facts, and the fact is that, at present, computers have not been able to do all these utopian things that you're talking about.
Now, concerning programming, you started out with a fallacy by saying that human beings are being programmed throughout their childhood development through the educational process and so forth. Of course, this is the tabula rasa idea of the human being, but one has to distinguish between this and the programming of a computer, because in the programming of a computer the programmer truly is dealing with a tabula rasa. He has a certain set of instructions that he can work with, supplied by the computer language that he's using. In terms of those instructions he has to figure out, within his own mind, a logical set of steps which will cause the computer to do the desired task, whatever it may be. Then he writes this down, enters it into the computer and executes it. And if he has worked out his ideas correctly, then the computer will perform in the desired way.
We don't do that when we educate children. We don't do anything like that. We talk to them. We explain ideas to them. Somehow they understand or, in some cases, they don't. But whatever is going on in their minds is actually a complete mystery to us. No one understands what's going on in the mind of a child, or any other human being, when you teach him something. And no one can really explain why one child understands it very quickly and another does not. And, in any case, certainly no one is understanding how the neurons are hooked together and seeing that, well, if we systematically reconnect them in this way, then the brain will produce the following functions, and so forth. So it's quite incorrect to suggest that the word "programming," as applied to computers, similarly applies to human beings. That would be a misuse of language. So, in actual fact, we see that human beings are very far from being a tabula rasa.
And many things that human beings come up with cannot be accounted for in terms of their education. This includes computers themselves. I think it is fair to say that there's nothing in the education of John von Neumann which would account for the fact that he created the idea of the general-purpose digital computer, or in the life of Alan Turing, who did the same thing in England at about the same time. Obviously, there were many other people who had an education similar to these men, and they didn't invent the computer or, perhaps, didn't invent anything in their entire lives. So, how do you account for this inventive process? That is something that one should at least consider with an open mind. Now, as for what computer algorithms are able to do, in fact we don't yet see computers that exhibit this kind of inventiveness. And even if, as you say, the computer discovers a law, you can be sure that the programmer understood that the algorithm was such that it would lead to the revelation of that law. You think the programmers are so stupid? No, they can understand the sort of thing that the computer can come up with. And, of course, we get into a fine point here. If you program a computer to compute 100 digits of pi, it may do that, and you may not know those 100 digits of pi. So you don't know what is coming out, but you understand the general principles by which it is producing the result. And in the case of computers that perform various operations and so forth, the human programmer understands the principles by which the computer is producing a given result. So it's actually a significant question how the human brain gets programmed - that is, where the programming, in the computer sense, really comes from.
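[The 100-digits-of-pi example can be made literal. The sketch below uses Machin's 1706 formula, pi = 16·arctan(1/5) - 4·arctan(1/239), with Python's decimal arithmetic; the programmer knows why the series converges without knowing in advance what the hundred digits will be. The number of guard digits and the helper names are illustrative choices, not anything from the talk.]

```python
from decimal import Decimal, getcontext

def arctan_recip(x, precision):
    """arctan(1/x) from its Taylor series, in fixed-precision decimal arithmetic."""
    getcontext().prec = precision
    x = Decimal(x)
    power = Decimal(1) / x          # 1/x, then 1/x^3, 1/x^5, ...
    total = power
    k = 1
    while power / (2 * k + 1) > Decimal(10) ** (-precision):
        power /= x * x
        term = power / (2 * k + 1)
        total += term if k % 2 == 0 else -term   # alternating signs
        k += 1
    return total

def pi_digits(digits=100):
    """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    working = digits + 10                        # a few guard digits
    pi = 16 * arctan_recip(5, working) - 4 * arctan_recip(239, working)
    getcontext().prec = digits
    return +pi                                   # unary plus rounds to `digits` figures

print(pi_digits(100))  # digits the programmer need not know in advance
```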
Of course, people indeed do not understand the human brain. As far as the capacity of computers is concerned, as I was saying, the indication is that we'll need parallel processors to duplicate human sensory functions. It is probable that most of the functions of visual perception can be reproduced using suitable parallel processors. Basically, you have to have something on the same level of organization as the brain, which has an estimated (well, there are different estimates) 100 billion neurons interconnected in a very complex pattern. I myself would think that if you can make a computer with the equivalent of 100 billion neurons interconnected in the right way, you can duplicate all kinds of different human functions; but still there's the question of where the organization of the actual programming comes from.
[23:53]
I'd like to make another point about what young children do. It's a rather amazing thing to see: they learn to speak. It's quite remarkable, because they somehow figure out grammar when they're about two or three years old, and they can't even explain to you how they do it, but somehow they do it. In a few years they're fluently speaking a particular language that they've been exposed to. Linguists who carefully make an academic or scientific study of language - you know, they'll go through all kinds of intricate lines of abstract reasoning in trying to understand grammar. But here this child just picks it up. It would be very interesting to program a computer that could learn to speak. I personally think that, in principle, you might even do that, but I'd like to see you do it without understanding the basic principles of how a child learns to speak. Stabs in the dark generally don't work in the area of computer programming. There's a standard saying, "GIGO" - "garbage in, garbage out" - which describes what happens with computers. Generally speaking, if you don't know what you're doing, you're not going to get a good result. In fact, universally that's the case.
Even if you think you know what you're doing, you'll probably have to spend a lot of time trying to debug the program, to iron out the flaws in your own reasoning that you didn't think were there. Actually, the computer is very merciless in revealing the flaws in the reasoning of the programmer. And most of the time spent on computer programming, including producing all these artificial intelligence programs that give these various amazing results, is spent in ironing out the flaws in the algorithm. The person writes his algorithm, has the computer execute it, and it doesn't work the way he thought it ought to. So then he asks, "Why does that happen?" And eventually he figures out why it wasn't working properly. Then he fixes his algorithm. Then he runs it again and something else goes wrong. You can spend hours and hours and hours in labour doing this. And this is what hackers, as they're called, really do. So the end product is not just some marvel of intelligence that manifests in the machine. I think you could argue that it's the intelligence of the human programmer that has finally been crystallized in a sufficiently precise form that it can now be automatically executed by the computer. But we don't yet have computers that can do this programming work for us, and it's a bit utopian to say that "well, once we come to develop nanotechnology, and we can produce switching systems on the level of protein molecules and so forth, then computers will acquire these capacities." That again is the post-dated blank check idea. So this is still on the negative side.
Q: [unclear]
[28:17]
A: It’s actually a very logical development that we now come to the theory of evolution - really, it is very logical - and that, by the way, is why the theory of evolution is a very important topic for us to discuss. So as far as the idea that the human mental capacity developed as a result of the process of evolution taking place over millions of years, this proposal is not a real contribution to scientific knowledge; it is simply a red herring - that's an English phrase, anyway - that diverts one's attention away from the real problems. Because even if it's true that man has evolved over millions and millions of years from an original single-celled organism, we have no idea how that came about, nor do we have any means to establish whether it's really true. But let us grant, just for the sake of argument, that it is true: that if you go back to the Precambrian period you'll find just single-celled organisms in the oceans of the earth, and then you go through a series of stages to metazoan creatures, the first fish, amphibians, reptiles, mammals, monkey-like creatures, apes, and finally human beings. In the course of that development, how is it that this program in the brain which allows for intelligence has come to be? No one has any idea.
You would have to say that if you have a brain that is working at a certain level of organization and you randomly mutate the DNA that defines the organization of that brain, sometimes the mutation will introduce a defect, and the resulting organism will tend to die out in the process of natural competition; other times it may produce an improvement, and in that case, by definition, the organism will reproduce more effectively. As a result of this process, one has to say, human intelligence has arisen - but no one has any idea how human intelligence can arise by such a process. It is simply a hand-waving argument. We don't know the first thing about it. Even if it's really true that it happened that way, we don't know anything about it. So therefore to say, "Well, the computers are evolving very rapidly, and human beings have had so much longer to evolve, therefore it's reasonable that they should be better than the computers; but the computers will quickly catch up" - this is not a valid argument.
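[For what it is worth, the mutation-and-selection process just described can be written down as a toy loop: a bit-string "genome", one random flip per generation, and survival of whatever scores at least as well on a fitness measure fixed in advance. The fitness function here (counting 1s) and every parameter are invented stand-ins; a sketch like this says nothing about whether such a process could produce intelligence, which is exactly the gap being pointed out.]

```python
import random

def evolve(length=20, generations=200, seed=0):
    """Toy mutation-and-selection loop with a hand-chosen fitness function."""
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in range(length)]
    fitness = lambda g: sum(g)                   # invented stand-in criterion
    for _ in range(generations):
        mutant = list(genome)
        mutant[rng.randrange(length)] ^= 1       # one random "mutation"
        if fitness(mutant) >= fitness(genome):   # the "fitter" variant survives
            genome = mutant
    return genome, fitness(genome)

print(evolve())  # after enough generations the genome is mostly 1s
```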
Actually, the kind of argument that you should really pursue, if you’re really interested in how these things work, is to examine actual models, examine how computer programs actually work, and focus on the real question of how you would make a computer program that would exhibit human creativity and so many different features. Now, people in artificial intelligence have been working on it for many years. Over 20 years ago Marvin Minsky of MIT was saying that in 20 years you’ll have computers that exceed human-level intelligence. So it’s interesting that this number 20 comes up - I don’t know why - but over 20 years ago Minsky was saying, “In 20 years computers will exceed human intelligence,” and his idea was that they will indeed be self-learning computers which will then bootstrap themselves even further and become superhuman. And Minsky said maybe they’ll keep us as pets. That was his proposal. So, perhaps fortunately, this didn’t come to pass, and there is actually a fair amount of frustration in the artificial intelligence community. And in fact the whole trend in science, as I was mentioning today, is to go into the neural-net approach now and abandon the sequential-computer method of trying to create intelligence.
So statements about what is going to be done in 20 years don't help one in scientific understanding. That’s the real problem with this post-dated blank check argument, which is the only argument that they really have - saying that in 20 years we will do this. Actually, wasn’t it the Frenchman - I don’t speak French - La Mettrie? He wrote a book called L’homme Machine. How would you pronounce that in French? L'homme machine? So this was in the 18th century that he wrote this - I think it was the 18th century. He outlined all the arguments for why a human being is simply a mechanism, way back then, and here we are in the 20th century and the same argument is being made. So what I would like to do now is make a constructive proposal, and this is to suggest that the human brain actually is programmed by a nonphysical entity which interacts with the brain. This is a suggestion for further scientific research.
[34:13]
Q: Is that what you call a ghost in the machine?
A: This is the ghost in the machine. We are proposing to use the sarcastic phrase of Gilbert Ryle, the Oxford professor. Yes, I’m proposing that there’s a ghost in the machine. I’m resurrecting Cartesian dualism; we can bring up the name Descartes. So I would suggest that this would actually be a straightforward line of investigation. You see, one question you can ask is: in the execution of algorithms, how far can you go from the point where information is input to the computer? You can program a computer to do a particular thing; but we find in practice that after a while we are not satisfied with what it’s doing and we want it to do more things. So then we put in more programming and then it does more things. So in the execution of different human functions you can ask, “How long is the sequence of steps involved, going from the point where the information has to be reintroduced, in order to keep the human entity going?” There happens to be medical evidence that has some bearing on this, and it comes from the study of epilepsy. There’s a certain kind of epilepsy, petit mal epilepsy, called automatism, which is well studied in the medical profession. This involves a chaotic discharge of nerve impulses in a particular part of the brain, usually set off by some injury that is there in the brain. It’s a sort of avalanche effect, just like rolling a snowball down a mountain slope with a lot of snow: it builds up more and more and more until you have a huge avalanche. Well, this occurs in the brain and can cause epilepsy. In some cases the avalanche affects the motor regions of the brain and this causes violent muscular contractions, and the person goes into convulsions.
But there are other cases where the epileptic discharge affects a certain region in the midbrain and the person becomes what is called an automaton. It’s very interesting to see how the person behaves. A person who goes into this state of automatism may, let’s say, be walking home from work at the time. Well, he walks all the way home, he stops for traffic, he responds appropriately to lights and signs, he exhibits the ability to read, and so forth. But if you go up and try to engage in conversation with him, it’s like talking to a zombie. It’s been known to happen that a person may have an epileptic attack of this kind, say, while playing the piano; the person keeps right on playing, but something goes out of the playing of that person. The music becomes mechanical.
There’s one case of a child who had this epileptic problem and was practicing her piano lessons, and the mother would sometimes notice that her playing changed in a subtle way. All the, sort of, meaning and feeling went out of it, even though she was still playing properly, and the mother could tell, ‘Now she’s having one of her attacks.’ So one could suggest - in fact, this was suggested by the neurosurgeon Wilder Penfield - that what we see here is a distinction between what the brain can really do as a computer, or an automaton, and what the brain can do when it’s getting input from some other source, which I propose to be the nonphysical mind.
This is a hypothesis, but at least it’s something you can think about, and it can lead to further fruitful research. So the suggestion here is that the brain automaton can in fact do quite remarkable things. It can even cause the body to walk home through traffic, crossing different streets and so forth appropriately. But it lacks something: it isn’t able to interact with us in what we would call a really human fashion. So one could propose the following model: the brain really is like a computer; it performs marvelous information-processing tasks, but it is a special-purpose computer that the mind is using in order to guide and program the body. And there’s even a useful conclusion from this for devotees, namely, that the brain automaton can chant your rounds for you. It’s really a serious proposal.
Q: Your mind is somewhere else?
A: Your mind is somewhere else. It can happen. Because in fact when we’re inattentive to our rounds, we’re thinking of something else. The voice is going on chanting Hare Krishna, all the syllables are there and so on. That’s the brain automaton and the brain automaton doesn’t make any spiritual advancement because it’s a material system. Meanwhile the soul is focused on a different subject matter quite far removed from Krishna. So that’s just a practical suggestion based on this idea.
Q: Is there evidence for or against the idea that the mind is continuous and unchanged even in cases where there is gross, permanent or temporary damage to the brain?
[40:20]
A: Well, the whole study of brain damage is very interesting, and this of course has bearing on the topic that we deliberately avoided thus far, namely, consciousness. For example, a complete hemispherectomy - that is, the surgical removal of one cerebral hemisphere - can be carried out under local anesthesia, and the patient is conscious during the entire operation. This has actually been done, horrendous though it may seem. So you can remove half of a person’s cerebrum and his consciousness isn’t affected when you do it. Of course, when the person recovers from the operation he is what’s called hemiplegic - one half of his body won’t function - and it may be that he’s what is called aphasic: he loses the power of speech. But if he does retain any capacity to communicate through speech at all - sometimes this capacity is there but is greatly impaired - he will announce that he was conscious during the whole thing and that he is still conscious now, but that he is feeling very frustrated because when he tries to speak it won’t work. Just to give you an example of what some people go through in this regard, there was a Russian soldier who was severely wounded in World War Two and recovered. He lost the power of speech, but he was still able to read and comprehend. He wasn’t able to write, however; if he wanted to write, it just would not happen. So what people would do is flash words before him, cut out from newspapers, just one word after another. He couldn’t speak the word he wanted, he couldn’t even point to letters and spell it out, he often couldn’t even call the word to mind - but as soon as the word he wanted came before his eyes, he would recognize it. Just as sometimes you may have the experience that you are trying to think of something, and as soon as you see something that reminds you of it, you think of it. So in this way, by indicating ‘that’s the word,’ and then indicating the next word, and so on, he actually wrote essays and explained his situation and how frustrating it was. So this is an example.
One could suggest here that the brain-computer in this case was severely impaired due to the damage, but somehow the link-up of the mind was intact, and the computer wasn’t so severely impaired that it was no longer possible for the mind to make use of it. At least one could suggest that this might be a fruitful line of inquiry. I’m not saying that anything I’ve said here proves that, but I do suggest that, rather than saying that in 20 years or whatever computers are going to do all these marvelous things, when in fact we have no idea how to do it, we should look at what the evidence actually shows.
It is worthwhile considering some alternative hypotheses, especially considering the fact that people have thought very carefully about how to make a computer that can learn in a flexible way like a child, and they haven’t come up with the right insights yet. Even if it’s possible to do that, we don’t yet have the insights as to how to program the computer to do it. It’s easy to get a computer to store data. You’ve got a computer that stores up lots of data; it can easily hold many more slokas than you do, right? But how do you get the computer to exhibit human intelligence and human knowledge? We haven’t figured out how to do it yet. So at the present stage of our knowledge it’s worthwhile considering some alternative hypotheses. Scientists should be more open to the total range of data that’s actually there and available to them. So that’s something I can suggest without even bringing in other data which scientists tend to neglect, such as the out-of-body experience data, which is very interesting.
I suppose I may as well mention it, because if we're speaking of data, we may as well consider what is there. I’ll give you what seems to be a quite scientifically respectable example of this. There’s a doctor, Michael Sabom, who is a cardiologist. His practice has mainly been in Florida, and he was attached to a university medical school in the area at one time. So he was told by a colleague of his about a book by one Raymond Moody called Life After Life, which recounted incidents in which a person had a heart attack and was resuscitated by the doctors, and then reported that he had traveled outside of his body and experienced different things. So a book of case histories of this sort was published. When Sabom heard about this he was extremely skeptical, and he thought that these stories probably represented hallucinatory dreams that the person had either just before or just after the attack, related to the anxiety of having a close brush with death. One might say that because one realizes that one is actually dying, one begins to imagine how one is separate from the body, or something like that. So he thought there’s some explanation like this that will account for these things. Under pressure from his colleague, he began to interview his own patients. So he had many case studies from his own practice in which he knew the exact medical procedures that were carried out while the person was unconscious - not only unconscious, but with a heart that was not beating, so no blood was circulating through the brain. He would interview the patients who reported out-of-body experiences and ask them to explain what was going on during their resuscitation. He found, to his amazement, that a significant number of patients were able to give very detailed descriptions of exactly what was going on, and the procedures were not identical in every case. There are many things that the doctor can do; they do them in different orders and so forth, depending on the particular case.
[47:40]
So they were able to give accurate accounts of what happened when their heart was actually stopped, and had been stopped for at least a couple of minutes, because it takes time for the nurses to see that the person is having an attack, and for the doctors to rush them into the intensive care unit and start applying the different machines they use to resuscitate the person, and so forth. He found that none of the patients he interviewed made any serious mistake concerning what had happened. Some of them were quite vague, but none made a serious mistake, and many gave considerable detail. As a test, he interviewed a whole series of patients who had not had out-of-body experiences. He called these ‘seasoned cardiac patients’: people who had spent a lot of time in hospitals and had had a reasonable opportunity to acquaint themselves with the medical literature, and who, since they had a serious heart condition, were naturally interested in these things. So they had every opportunity to acquaint themselves with what happens in cardiac resuscitation. He found that they tended to make serious mistakes about the procedures in their accounts, and also that many of the accounts were vague.
So he came to the conclusion that it would appear that some kind of mind activity is going on which involves sense perception and storage of memories, and that this can occur when the brain is not functioning. He considered many alternative explanations - there’s a whole list of them: temporal lobe epilepsy, hypercarbia, hypoxia - there’s a whole list of explanations you could use to try to account for this, and in his book he systematically tries to go through all of them. So he became convinced that there appears to be some non-physical thing that has senses and memory and intelligence. So that’s interesting evidence that has bearing on this whole issue. One can suggest to the artificial intelligence people that it’s all well and good to think of the brain as a remarkable computer-like information-processing device, but maybe there’s also a programmer of that brain; maybe the brain is more like the automatic pilot of an airplane, which carries out functions in the airplane, including even navigation, as an aid to the pilot - and analogies like that.
[49:49]
Q: [unclear]
A: Well, one wonders whether that may be the faith of Marvin Minsky; I don’t know. You see, for that to happen the subtle mind would have to interact with the electronics of the computer somehow. Now there’s the question, “Well, if there really is a subtle mind that is programming the brain-computer mechanism, then how does that work? How does it actually interact with the operations of the neurons and so forth?” Well, that was of course one of the things that I was discussing the day before yesterday - that these different considerations regarding the laws of physics do suggest a way in which a nonphysical mind (by “nonphysical” here I mean, by definition, something more subtle than the ethereal element, just to nail down the term with a precise definition) could interact with the nervous system and program the brain. So this also, by the way, is something that scientists could research. For example, this professor I met from Shanghai in China was thinking along these lines himself. He was thinking that these manifestations of deterministic chaos in his models of the glial cells suggested to him how the mind might interact with the brain. So that is something that conceivably could be investigated, although admittedly it would be very hard to investigate. But it’s a fruitful line of scientific research. One can even think of experiments there that one could pursue.
Q: [unclear]
A: Well, you’re defining intelligence as discrimination, so we can think about that for a moment. We have to keep in mind this post-dated blank check argument that always crops up. You can certainly program a computer to discriminate between different situations. For example, you can very easily make a computer program that can discriminate between a circle and a square, just to give a simple example; it’s practically trivial to do that. So you can say, “Well, that program exhibits discrimination.” Then you can even program a computer to control a robot in such a way that it would make discriminations within the context of its environment so as to avoid some kind of danger. It’s conceivable that you could do that in some particular situation; you really would have no problem doing it. But now you’re talking about discrimination involving ideas of right and wrong and so on and so forth. You see, there you come into an area where you simply go beyond any of the knowledge or insights of people working with computers. So one can wave one’s hand and say, “Yeah, in 20 years we’ll do it,” but no one has any idea at the present time, first of all, how to represent, let us say, the state of mind of the individual which forms the basic framework in which the discrimination is being made.
For example, if you want to represent circles and squares, that’s easy: you take a grid of ones and zeros, and one pattern of ones and zeros will be a circle and another will be a square. But how do you represent the different things that are there in the human mind involving feelings? For example, how do you discriminate between lust and love, just to give an example? That’s a significant question; it even has very significant practical application. When is it lust and when is it love? How, for a computer, would you represent what those things are? We don’t have the faintest idea at this time. So one thing one could propose for the advocates of artificial intelligence and so forth is that they should also have a little bit of humility concerning their own limitations. One should advertise and boast about what one has actually done, instead of just waving one’s hands and saying, “Yes, in 20 years we will duplicate the human being, therefore you all are nothing but machines,” and so forth. And of course that is something that we hear: artificial intelligence is used as an argument against a spiritual understanding of what a human being is. That’s how Alan Turing originally used it with his imitation game. He was very much concerned with showing that a human being is nothing but a machine.
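[Referring back to the circle-and-square example above: here is a minimal sketch of that kind of discrimination, with each shape drawn as a grid of ones and zeros and a single hand-chosen statistic (how much of the bounding box is filled) separating the two. The grid size and the 0.9 threshold are arbitrary choices for illustration, and of course nothing of the sort extends to representing lust or love.]

```python
def make_square(n=41):
    """A solid n-by-n square of ones."""
    return [[1] * n for _ in range(n)]

def make_circle(n=41):
    """A filled disc of ones inscribed in an n-by-n grid of zeros."""
    c = r = n // 2
    return [[1 if (x - c) ** 2 + (y - c) ** 2 <= r * r else 0 for x in range(n)]
            for y in range(n)]

def classify(grid):
    """Fill ratio of the bounding box: a square fills it (~1.0), a disc ~pi/4."""
    filled = sum(sum(row) for row in grid)
    ratio = filled / (len(grid) * len(grid[0]))
    return "square" if ratio > 0.9 else "circle"

print(classify(make_square()))  # square
print(classify(make_circle()))  # circle
```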
Yadunandan das: You have listened to a broadcast from Radio Krishna. Yadunandan das was responsible for the production and engineering. Until next time, Hare Krishna!