Will the computer outwit man?

Arthur Samuel playing Samuel Checkers on an IBM 7090 in 1959.
Reprint Courtesy of IBM Corporation ©

This feature appeared in the Oct. 1, 1964 issue of Fortune.

Ever since it emerged from the mists of time, the human race has been haunted by the notion that man-made devices might overwhelm and even destroy man himself. The sorcerer’s apprentice who almost drowned his world, Frankenstein’s frustrated monster who tortured and destroyed his creator, the androids that mimic human beings in the bombastic pages of today’s science-fiction magazines—all play upon the age-old fear that man’s arrogant mind will overleap itself. And now comes the electronic computer, the first invention to exhibit something of what in human beings is called intelligence. Not only is the computer expanding man’s brainpower, but its own faculties are being expanded by so-called artificial intelligence; and the machine is accordingly endowing man’s ancient fears with a reality and immediacy no other invention ever has.

The fears are several and intricately related, but three major ones encompass the lot. The one that worries the columnists and commentators is that the computer will hoist unemployment so intolerably that the free-enterprise system will be unable to cope with the problem, and that the government will have to intervene on a massive scale. This belief, so noisily espoused by offbeat groups like the Ad Hoc Committee on the Triple Revolution, has already been dealt with in this series. It is enough to repeat here that the computer will doubtless go down in history not as the explosion that blew unemployment through the roof, but as the technological triumph that enabled the U.S. economy to maintain the secular growth rate on which its greatness depends.

The second fear is that the computer will eventually become so intelligent and even creative that it will relegate man, or most men, to a humiliating and intolerably inferior role in the world. This notion is based on the fact that the computer already can learn (after a fashion), can show purposeful behavior (narrowly defined), can sometimes act “creatively” or in a way its programer doesn’t expect it to—and on the probability that artificial-intelligence research will improve it enormously on all three counts. Meanwhile there is the third fear, which is that the computer’s ability to make certain neat, clean decisions will beguile men into abdicating their capacity and obligation to make the important decisions, including moral and social ones. This fear as such would be academic if the second one were realized; for if the computer ever betters man’s brainpower (broadly defined), then its judgments will be superior too, and man finally will be outwitted. To appraise both fears, therefore, we must examine artificial-intelligence research, the formidable new science that is striving so industriously to make the computer behave like a human being.

The routes to judgment

The goal of artificial-intelligence research is to write programs or sets of instructions showing the computer how to behave in a way that in human beings would be called intelligent.* The workers proceed on the assumption that human nervous systems process information in the act of thinking; and that given enough observation, experiment, analysis, and modeling, they can instruct a digital computer to process information as humans do. Broadly speaking, they simulate human intelligence in two ways. One is to build actual counterparts of the brain or the nervous system with computer-controlled models of neural networks. The other and more productive approach (so far) is to analyze problems that can be solved by human intelligence and to write a computer program that will solve the problems. Most if not all their toil involves programing the computer “heuristically”—that is, showing it how to use rules of thumb, to explore the likeliest paths, and to make educated guesses in coming to a conclusion, rather than running through all the alternatives to find the right one. Looking closely at their ingenious achievements, which appear so marvelous to the layman, one begins to understand why it is easy to make sweeping projections of the machine’s future as an intelligent mechanism.

But these research workers have a hard if exhilarating time ahead of them. When one turns to living intelligence, one is struck with the colossal job that remains to be done—if the word “job” isn’t too presumptuous to be used in this context at all. If one defines intelligence merely as the ability to adjust to environment, the world is positively quivering with what might be called extracomputer intelligence. Even the lowest species can reproduce and live without instruction by man, something no computer can do. Moreover, the exercise of intelligence in animals, and particularly in higher animals, is a stupendously complex process. As Oliver Selfridge and Ulric Neisser of M.I.T. have put it in a discussion of human pattern recognition, man is continually confronted with a welter of data from which he must pick patterns relevant to whatever he is doing at the time. “A man who abstracts a pattern from a complex of stimuli has essentially classified the possible inputs,” they write. “But very often the basis of his classification is unknown even to himself; it is too complex to be specified explicitly.”

A man adjusting the random wiring network between the light sensors and association unit of scientist Frank Rosenblatt’s Perceptron, or MARK 1 computer, at the Cornell Aeronautical Laboratory, Buffalo, N.Y., circa 1960.
Frederic Lewis—Archive Photos/Getty Images

Yet specify the researcher must if he is to simulate human behavior in the machine. For the electronic computer is a device that processes or improves data according to a program or set of instructions. Since it is equipped with a storage or memory, a stored program, and a so-called conditional transfer, which permits it to make choices, it can be instructed to compare and assess and then judge. Nevertheless, its conclusions are the logical consequences of the data and program fed into it; and it still must be told not merely what to do, but how, and the rules must be written in minute and comprehensive detail. Ordinary programs contain thousands of instructions. The research worker cannot make the machine do anything “original” until he has painstakingly instructed it how to be original, and thus its performance, at bottom, depends on the intelligence or even genius of its preceptor.

Although the men doing artificial-intelligence research are themselves extraordinarily intelligent, they disagree widely and acrimoniously not only about what exactly they expect to do but even about what exactly they have done. In a recent paper John Kelly of Bell Laboratories and M.I.T.’s Selfridge, who themselves disagree about almost everything, agreed that the most controversial subject in this work is whether man’s ability to form concepts and to generalize, an age-old concern of philosophers, can ever be imitated. Some conservative extremists think no machines can ever emulate this faculty, while their opposites believe the problem is about to be solved. The moderates take various stands in between.

Everybody in the intelligence business would probably agree with Kelly and Selfridge that the task of simulating human intelligence has a long way to go. “Our position may be compared to pioneers on the edge of a new continent,” they write. “We have tested our axes on twigs, and made ladders and boats of paper. In principle we can cut down any tree, but obviously trees several miles in girth will take discouragingly long. We can span any river with bridges or boats in principle, but if the river is an Amazon with a thirty-knot current we may not be able to do it in fact. Then again, the continent may be two light-years across. However, as pioneers, what we do not see is a river of molten lava, which at one sight would make us admit the inapplicability of our tools.”

The machine that learns

One man who has chopped more than a few twigs and crossed a stream or two, but has no illusions about the rivers and forests ahead, is Arthur Samuel, consultant to I.B.M.’s director of research. Samuel, who has been classified as close to center in the field of intelligence research, pioneered in machine learning in the late 1950’s by teaching a computer to play checkers so well that it now consistently beats him. To program the machine to play the game, Samuel in effect stored a model of the checkerboard in the computer. Then he instructed the machine to look ahead as a person does, and to consider each move by analyzing the opponent’s possible replies and counterreplies. Although the machine theoretically could search all possible choices, there are 10⁴⁰ such choices in every game; and even the fastest computer would take longer than many think the universe is old to play a game by ticking them all off.

So Samuel instructed the machine to proceed heuristically by feeding it two “routines” that showed it how to learn from experience. One routine told it how to remember past positions and their outcomes, and the other told it how to improve the way it appraises positions. The machine got steadily better, and was soon superior to its master. But this does not mean, Samuel insists, that the computer is categorically superior to man. It beats him not because it uses information or techniques unknown or unknowable to man, but simply because its infallible memory, fantastic accuracy, and prodigious speed allow it to make a detailed but unimaginative analysis in a few seconds that would take man years to make. When, if, and as a championship chess program is constructed, says Samuel, the same generalization will hold. No chess program can yet play much better than an advanced novice; the reason is that chess is vastly more complex than checkers, and nobody has yet devoted enough time or thought to the task.
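
To make Samuel’s two routines concrete for the present-day reader, here is a minimal sketch in modern Python. The toy “game” (positions are integers, moves add or subtract), the features, and the learning rate are invented stand-ins, not Samuel’s checkers program; the sketch only illustrates depth-limited look-ahead, a memory of positions already appraised, and the adjustment of a scoring polynomial.

```python
# A minimal sketch of the two routines, not Samuel's program: the "game" is a
# trivial stand-in, and the features and learning rate are invented.

def legal_moves(pos):                 # stand-in game: from any position, +1 or -2
    return [1, -2]

def apply_move(pos, move):
    return pos + move

def features(pos):                    # hand-chosen features of a position
    return [pos, pos % 3]

rote_memory = {}                      # routine 1: remember positions already appraised
weights = [1.0, 0.5]                  # routine 2: coefficients of the scoring polynomial

def score(pos):
    """Appraise a position as a weighted sum of its features."""
    return sum(w * f for w, f in zip(weights, features(pos)))

def lookahead(pos, depth, my_turn=True):
    """Consider each move, the opponent's replies, and the counter-replies."""
    key = (pos, depth, my_turn)
    if key in rote_memory:            # reuse a remembered appraisal
        return rote_memory[key]
    if depth == 0:
        value = score(pos)
    else:
        values = [lookahead(apply_move(pos, m), depth - 1, not my_turn)
                  for m in legal_moves(pos)]
        value = max(values) if my_turn else min(values)
    rote_memory[key] = value
    return value

def adjust_weights(pos, outcome, rate=0.01):
    """Nudge the scoring weights toward the outcome actually observed.
    (A real program would also refresh any stale remembered appraisals.)"""
    error = outcome - score(pos)
    for i, f in enumerate(features(pos)):
        weights[i] += rate * error * f

print(lookahead(0, depth=4))          # look four moves ahead from the starting position
```

The point is the same one Samuel makes in the article: the machine’s edge comes from tireless bookkeeping and rapid search, not from any technique unknown to its programer.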

The limitations of the computer, Samuel likes to put it, are not in the machine but in man. To make machines that appear to be smarter than man, man himself must be smarter than the machine; more explicitly, “A higher order of intelligence, or at least of understanding, seems required to instruct a machine in the art of being intelligent than is required to duplicate the intelligence the machine is to simulate.”

Samuel scoffs at the notion that the great things in store for the machine will give it a will, properly defined, of its own. In relatively few years machines that learn from experience will be common. The development of input and output devices has some way to go, but in twenty years or so businessmen may be able to discuss tasks and problems with computers almost as they now discuss them with other employees; and televideo-phones, for companies that cannot afford their own computer systems, will make central machines as convenient as the telephone itself. Programs will be written to instruct machines to modify their own rules, or to modify the way they modify rules, and so on; and programs will also instruct one machine to program another, and even how to design and construct a second and more powerful machine. However, Samuel insists, the machine will not and cannot do any of these things until it has been shown how to proceed; and to understand how to do this man will have to develop still greater understanding. As the great originator, he will necessarily be on top.

The artificial brain

An apparent exception to such a conclusion, Samuel acknowledges, is that man may eventually build a machine as complex as the brain, and one that will act independently of him. The human brain, which contains roughly 10 billion cells called neurons, works strikingly like a computer because its neurons react or “fire” by transmitting “excitatory” or “inhibitory” electrical impulses to other neurons. Each neuron is connected to 100 others on the average, and sometimes to as many as 10,000, and each presumably gets more than one signal before “firing.” Whether these neurons are at birth connected at random or in a kind of pattern nobody knows, but learning, most research workers think, has something to do with changes in the strength and number of the connections. Some hold that creativity in human beings consists of “an unextinguishable core of randomness”—i.e., creative people possess a lot of random connections.

To imitate the behavior of neurons, several researchers have built “self-learning” or “adaptive” machines using mechanical, electrical, and even chemical circuits, whose connections are automatically strengthened, as they presumably are in the brain, by successful responses. The Perceptron, built by Frank Rosenblatt, a Cornell psychologist, has for years been demonstrating that it can improve its ability to recognize numbers and letters of the alphabet by such a routine. Because such devices are proving useful in pattern recognition, they doubtless will be improved.
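
A minimal sketch, in modern Python, of the adaptive principle described above: the classic perceptron rule, which strengthens or weakens connection weights after each response. The tiny pattern set and the learning rate are invented for illustration; Rosenblatt’s machine was, of course, hardware rather than a program.

```python
# Illustrative only: the perceptron learning rule on an invented pattern set.
# Each pattern is a tuple of sensor readings; the label is +1 or -1.
patterns = [((1, 1), +1), ((1, 0), -1), ((0, 1), -1), ((0, 0), -1)]

weights = [0.0, 0.0]
bias = 0.0

def respond(inputs):
    """Fire (+1) if the weighted sum of inputs crosses the threshold, else -1."""
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1 if total > 0 else -1

for _ in range(10):                        # repeated presentations of the patterns
    for inputs, target in patterns:
        error = target - respond(inputs)   # zero if the response was already right
        for i, x in enumerate(inputs):
            weights[i] += 0.5 * error * x  # strengthen or weaken each connection
        bias += 0.5 * error

print([respond(inputs) for inputs, _ in patterns])   # now matches the labels
```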

Theoretically, these techniques might create a monster capable of acting independently of man; but Samuel argues that the brain’s complexity makes this highly unlikely. Even if great progress is made in imitating this complexity, not enough is known about the interconnections of the brain to construct a reasonable facsimile thereof; the chance of doing so, Samuel guesses, is about the same as the chance that every American will be stricken with a coronary on the same night. As others have estimated it, moreover, the total cost of duplicating all the brain’s cells and connections, even at the ludicrously low cost of only 5 cents per cell and 1 cent per connection, would come to more than $1 quintillion, or $1 billion billion. Some of the brain’s functions can probably be reproduced with vastly fewer cells than the brain contains, however, and the odds on building a tolerably “human” model brain will doubtless improve. But they will improve, Samuel reiterates, only as man improves his understanding of the thinking process; and his ability to control the mechanical brain will increase to the extent he increases his own understanding.

The optimistic extremists

Some enthusiastic and optimistic research scientists feel that such judgments tend to understate the potentialities of the machine. Allen Newell and Herbert Simon of the Carnegie Institute of Technology argue that man is no more or less determinate than the computer; he is programed at birth by his genes, and thenceforth his talents and other traits depend on the way he absorbs and uses life’s inputs. The day will come, they prophesy, when a program will enable the machine to do everything, or practically everything, that a man’s brain can do. Such a program will not call for “stereotyped and repetitive behavior,” but will make the machine’s activity “highly conditional on the task environment”—i.e., on the goal set for it, and on the machine’s ability to assess its own progress toward that goal.

Meanwhile, Newell and Simon insist, the computer will surpass man in some ways. Back in 1957, Simon formally predicted that within ten years a computer would be crowned the world’s chess champion, that it would discover an important new mathematical theorem, that it would write music of aesthetic value, and that most theories in psychology could be expressed in computer programs or would take the form of qualitative statements about the properties of such programs. Simon’s forecast has only three years to go, but he thinks it still is justified, and he sticks to it. Not only that, he confidently looks forward to the day when computers will be tossing off countless problems too ill structured for men to solve, and when the machine will even be able to generalize from experience.

Together with Newell and J. C. Shaw of Rand Corp., Simon has done a great deal of pioneering in artificial intelligence. The trio was one of the first to introduce heuristic methods; and their first notable achievement, in 1956, was the Logic Theory Program, which among other things conjures up proofs for certain types of mathematical and symbolic theorems. It was the Logic Theory Program that independently discovered proofs for some of the theorems in Russell and Whitehead’s Principia Mathematica, and in at least one instance it provided a shorter and more “elegant” proof than Russell and Whitehead themselves.

Simon and Newell’s General Problem Solver, mentioned in the first article in this series, is an ambitious feat that instructs the computer to behave adaptively by solving subproblems before going on to knock off bigger ones, and to reason in terms of means and ends. Using the General Problem Solver, Geoffrey P. E. Clarkson of M.I.T. has successfully instructed a computer to do what an investment trust officer does when he chooses securities for a portfolio. Clarkson analyzed the steps the officer takes, such as appraising the state of the market, and also analyzed these steps according to the postulates of the General Problem Solver. Thereupon Clarkson constructed a program that in several actual tests predicted with astonishing accuracy the trust officer’s behavior, down to the names and number of shares chosen for each portfolio. The General Problem Solver, furthermore, aspires to endow computers with more than such problem-solving faculties. It tries to show how people solve problems and so to provide a tool for constructing theories of human thinking. Its techniques, says Simon, “reveal that the free behavior of a reasonably intelligent human can be understood as the product of a complex but finite and determinate set of laws.”
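
For the modern reader, a minimal sketch of the means-ends idea: compare the current state with the goal, pick an operation that reduces the difference, and treat that operation’s unmet preconditions as subproblems. The little “errand” domain below is invented for illustration and is not the General Problem Solver itself.

```python
# A toy means-ends planner over an invented domain; not Newell and Simon's program.
operators = {
    "drive to shop":  {"needs": {"have car"},              "adds": {"at shop"}},
    "buy groceries":  {"needs": {"at shop", "have money"}, "adds": {"have groceries"}},
    "borrow car":     {"needs": set(),                     "adds": {"have car"}},
}

def solve(state, goals, plan=None):
    """Achieve each goal in turn, recursively solving operator preconditions."""
    plan = [] if plan is None else plan
    for goal in goals:
        if goal in state:
            continue                                # this difference is already gone
        for name, op in operators.items():
            if goal in op["adds"]:
                # subproblem: first satisfy the operator's own preconditions
                state, plan = solve(state, op["needs"], plan)
                state = state | op["adds"]
                plan.append(name)
                break
        else:
            raise ValueError(f"no operator achieves {goal!r}")
    return state, plan

_, plan = solve({"have money"}, {"have groceries"})
print(plan)   # ['borrow car', 'drive to shop', 'buy groceries']
```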

M.I.T.’s Marvin Minsky has predicted that in thirty years the computer will in many ways be smarter than men, but he concedes that it will achieve this high state only after very smart men have worked very long hours.
Ann E. Yow-Dyson—Getty Images

One of the most assiduous of the optimists is Marvin Minsky of M.I.T., who believes we are on the threshold of an era that “will quite possibly be dominated by intelligent problem-solving machines.” Minsky has divided the intelligence-research scientists’ achievements and problems into five main groups: (1) search, (2) pattern recognition, (3) learning, (4) planning, and (5) induction. To illustrate: In solving a problem, a worker can program a computer to (1) search through any number of possibilities, but because this usually takes too long, it is enormously inefficient. With (2) pattern-recognition techniques, however, the worker instructs the machine to restrict itself to important problems; and with (3) learning techniques, he instructs it to generalize on its experience with successful models. With (4) so-called planning methods, he chooses a few from a large number of subproblems, and instructs the machine to appraise them in several ways. And finally, (5) to manage broad problems, he programs the computer to construct models of the world about it; then he tries to program the machine to reason inductively about these models—to discover regularities or “laws of nature,” and even to generalize about events beyond its recorded experience.
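
A small illustration of point (1) against points (2) through (4): the puzzle below can be solved by blind enumeration of every possibility or by a rule of thumb that always explores the state closest to the target, and the two approaches examine very different numbers of states. The puzzle and the heuristic are invented solely to make the contrast visible.

```python
# Invented puzzle: reach a target number from 1 using the moves "add 3" and "double".
import heapq

MOVES = [lambda n: n + 3, lambda n: n * 2]
TARGET = 1001

def blind_search(start):
    """Breadth-first enumeration of every reachable number; count the states tried."""
    frontier, seen, tried = [start], {start}, 0
    while frontier:
        nxt = []
        for n in frontier:
            tried += 1
            if n == TARGET:
                return tried
            for move in MOVES:
                m = move(n)
                if m <= TARGET and m not in seen:
                    seen.add(m)
                    nxt.append(m)
        frontier = nxt

def heuristic_search(start):
    """Best-first search guided by distance to the target; count the states tried."""
    frontier, seen, tried = [(TARGET - start, start)], {start}, 0
    while frontier:
        _, n = heapq.heappop(frontier)
        tried += 1
        if n == TARGET:
            return tried
        for move in MOVES:
            m = move(n)
            if m <= TARGET and m not in seen:
                seen.add(m)
                heapq.heappush(frontier, (TARGET - m, m))

print(blind_search(1), heuristic_search(1))   # compare how many states each method examined
```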

Minsky has predicted that in thirty years the computer will in many ways be smarter than men, but he concedes that it will achieve this high state only after very smart men have worked very long hours. “In ten years,” he says, “we may have something with which we can carry on a reasonable conversation. If we work hard, we may have it in five; if we loaf, we may never have it.”

What is Susie to Joe?

Merely to describe briefly the artificial-intelligence projects now under way would take volumes. Among the more important are question-answering programs that allow a computer to be interrogated in English: the so-called BASEBALL project at M.I.T.’s Lincoln Laboratory, which answers a variety of queries about American League teams; and the SAD SAM (Sentence Appraiser and Diagrammer-Semantic Analyzer Machine) program of Robert Lindsay of the University of Texas, which constructs a model of a family and tells the computer how to reply to queries about family relationships, such as “How is Susie Smith related to Joe and Oscar Brown, and how are the two Browns related?”
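
To suggest what such a stored family model amounts to in present-day terms, here is a minimal sketch. The family, the handful of kinship rules, and the phrasing are invented; SAD SAM’s actual representation and its parsing of English sentences are not reproduced.

```python
# An invented family model answering relationship queries; not Lindsay's program.
parents = {                                   # person -> that person's parents
    "Susie Smith": {"Mary Brown", "Tom Smith"},
    "Mary Brown":  {"Ann Brown", "Oscar Brown"},
    "Joe Brown":   {"Ann Brown", "Oscar Brown"},
}

def siblings(a, b):
    """Two people count as siblings here if they share the same (known) parents."""
    return bool(parents.get(a)) and parents.get(a) == parents.get(b)

def relation(a, b):
    """Say how b is related to a, using a few simple kinship rules."""
    if b in parents.get(a, set()):
        return f"{b} is {a}'s parent"
    if a in parents.get(b, set()):
        return f"{b} is {a}'s child"
    if siblings(a, b):
        return f"{b} is {a}'s sibling"
    if any(b in parents.get(p, set()) for p in parents.get(a, set())):
        return f"{b} is {a}'s grandparent"
    if any(siblings(p, b) for p in parents.get(a, set())):
        return f"{b} is {a}'s aunt or uncle"
    return f"relation of {b} to {a} is not in the model"

print(relation("Susie Smith", "Joe Brown"))    # aunt or uncle
print(relation("Susie Smith", "Oscar Brown"))  # grandparent
print(relation("Joe Brown", "Oscar Brown"))    # parent
```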

There is Oliver Selfridge’s pioneering work in pattern recognition, which has led to techniques that have progressed from recognizing simple optical characters like shapes to recognizing voices. Pattern recognition, in turn, leads to the simulation of verbal learning behavior; one program that does this is the Elementary Perceiver and Memorizer (EPAM) of Herbert Simon and Edward A. Feigenbaum of the University of California. EPAM provides a model for information processes that underlie man’s power to acquire, differentiate, and relate elementary symbolic material like single syllables; and it has been used to compare computer behavior with human behavior and to construct “adaptive” pattern-recognition models. So far, so good. But the human brain forms concepts as well as recognizing digits and words; and although some work is being done on programing a model of human concept formulation, relatively little progress has been made in telling a machine just how to form concepts and to generalize as people do. And as we have already noted, there is much disagreement about the degree of even that progress.

The impedimenta of the intellect

Nobody has yet been able to program the machine to imitate what many competent judges would call true creativity, partly because nobody has yet adequately defined creativity. As a working hypothesis, Newell, Shaw, and Simon have described creativity as a special problem-solving ability characterized by novelty, unconventionality, high persistence, and the power to formulate very difficult problems. This is fine as far as it goes; a creative person may have and indeed probably needs all these. But he willy-nilly brings to his task much more than these purely intellectual aptitudes. He brings a huge impedimentum of basic emotions and aptitudes that were programed into him congenitally, and have been greatly augmented and modified in a lifetime of conscious and unconscious learning. The terrible temper of his mother’s grandfather, his own slightly overactive thyroid and aberrant hypothalamus, his phlegmatic pituitary, his mysterious frustrations, his phlogistic gonads, his illusions and superstitions, his chronic constipation, his neuroses or psychoses, even the kind of liquor he takes aboard—all combine to color his personality and imagination, and his approach to his work. “No man, within twenty-four hours after having eaten a meal in a Pennsylvania Railroad dining car,” H. L. Mencken used to argue, “can conceivably write anything worth reading.”

Consider the art of musical composition. “Great music,” wrote Paul Elmer More, “is a kind of psychic storm, agitating to fathomless depths the mystery of the past within us.” All composers and musicians may not find this a good description—nor may some intelligence research workers—but it suggests strikingly what goes into what a qualified judge would identify as a work of musical art. Such a work is not merely the opus of a brain sealed off in a cranium, but the result of a huge inventory of “states” or influences, inherited or acquired, that has colored that brain’s way of performing. The dissimilarities among Dvořák, Mahler, Richard Strauss, and Sibelius, to mention four late romantic composers, are probably the result mainly of such obscure inherited and acquired influences. And what dissimilarities they are! Music critics have spent lifetimes and built towering reputations on expounding and appraising just such differences.

The computer’s achievements in creative composition, literary and musical, are remarkable in the sense that Dr. Johnson’s dog who could walk on his hind legs was remarkable: “It is not done well, but you are surprised to find it done at all.” Since the vast bulk of durable music is relatively formal, arranged according to rules, it theoretically should be possible to instruct a computer to compose good music. Some have tried. Perhaps the best example of computer music is the Illiac suite, programed at the University of Illinois. By common consent, the Illiac suite is no great shakes; one of the moderate remarks about it is that repeated hearings tend to induce exasperation. But so, of course, does some “modern” music composed by humans; and it is possible that one day the computer may be programed to concoct music that many regard as good.

Research workers have had some luck in turning out popular jingles on the computer without a hard, long, expensive struggle; in twenty years or so the computer will doubtless be mass-producing ephemeral tunes of the day more cheaply than Tin Pan Alley’s geniuses can turn them out. But it appears that instructing a computer to compose deep, complex, or carefully prepared music is an almost heroic task demanding more talent and time (and money) than simply putting the music down in the first place, and not bothering with the computer at all.

Auto-beatnik creativity

Similar observations apply to creative writing. Man as a writer is the human counterpart of the data processor, but the inputs he has stored and the mind he processes them with are the result of thousands of kinds of influences. Put in focus and challenged by the job at hand, his brain processes the relevant data and comes up with all manner of output, from majestic imperative sentences like “No Parking North of Here” (official sign in one large city) to epic poems and novels in the grand style.

It should surprise no one that the computer can be programed easily and cheaply to do simple jobs of gathering and sorting information, and is in fact doing a rough job of abstracting technical works. As more and more of the world’s knowledge is stored on tape or drums, and as centralized retrieval systems are developed, the computer will be able to dredge up nearly everything available on a given subject and arrange it in some kind of order. If it can save research time, it will prove to be a boon to writers. The machine, moreover, has already been programed to write simple-minded television whodunits, and some believe it will soon supply soap-opera confectors with much of their material, or perhaps eventually convert them into programers. The cost of using the machine will also pace this “progress”; what the computer and programer can do more cheaply than the solo word merchant, they will do.

That creative writing of more complexity and depth is something else again is suggested by a sample of “Auto-beatnik” verses “generated” by computers programed under R. M. Worthy at the Advanced Research Department of General Precision, Inc., Glendale, California. Worthy and his experts have arranged several thousand words into groups, set up sentence patterns and even rhyming rules, and directed the machine to pick words from the groups at random, but in pre-specified order. The computer’s verses are syntactically correct but semantically empty; as Worthy allows, a machine must have an environment, a perception, an image, or an “experience” to write a “significant” sentence. “I am involved in all this nonsense,” says Worthy, “because I am fascinated with language. And eventually this research will be valuable in many things like translation and information retrieval.” Here is one of the machine’s verses:

Lament for a Mongrel
To belch yet not to boast, that is the hug,
The high lullaby’s bay discreetly crushes the bug.
Your science was so minute and hilly,
Yes, I am not the jade organ’s leather programer’s recipe.
As she is squealing above the cheroot, these obscure toilets shall squat,
Moreover, on account of hunger, the room was hot.

A common diversion in computer circles is to speculate on how a programer would instruct a computer to write as well as Shakespeare. Even the untutored layman can see some of the trees in the forest of problems ahead. Before doing anything, the preceptor must do no less than decide exactly what makes Shakespeare so good, which itself is probably a job for a genius. Then he must write the rules; he must formalize his conclusions about the bard’s talents in staggering detail, not omitting even the most trivial implications, so that the machine can proceed logically from one step to another lest it produce elegant gibberish when it starts to “create.” The set of instructions or program for such a project would probably run to several times the length of Shakespeare’s works. And it might demand more talent than Shakespeare himself possessed.

Creation as effective surprise

The indefatigable Newell and Simon, however, are not to be dissuaded. They have speculated on the notion that creativity might not always have to reside in the programer—that the computer on its own could match (not copy) such creations as a Beethoven symphony, Crime and Punishment, or a Cézanne landscape. Although they freely admit that no computer has ever come up with an opus approaching any of these, they suggest that none appears to lie beyond computers.

To create, they hold, is to produce effective surprise, not only in others but in the creator; and in principle a computer might do this. “Suppose,” they have written, “a computer contains a very large program introduced into it over a long period by different programers working independently. Suppose that the computer had access to a rich environment of inputs that it has been able to perceive and select from. Suppose—and this is critical—that it is able to make its next step conditional on what it has seen and found, and that it is even able to modify its own program on the basis of past experience, so that it can behave more effectively in the future. At some point, as the complexity of its behavior increases, it may reach a level of richness that produces effective surprise. At that point we shall have to acknowledge that it is creative, or we shall have to change our definition of creativity.”

They may. Their definition of creativity, critics feel, may be too narrow because it makes too little allowance for human motivation, or the complex mix of emotions and other drives that compels people to behave as they do. The question is not whether the machine can produce something original; any computer can do something trivial or incomprehensible that nobody has ever done before. The question is whether men can show the machine how to create something that will contain enough human ingredients to meet at least a minimum of approval by perceptive human beings specially qualified to judge the creation. The creations of Newell and Simon’s well-educated computer might amount to expensive nonsense unless the computer were fed a vast number of brilliant instructions on how to handle the sophisticated inputs, and unless these inputs included human motivations.

Human motivation and the computer

Human motivations, Ulric Neisser believes, must be considered not only by workers who would instruct the machine to create, but also by those who would increase its power otherwise to simulate human intelligence. Man’s intelligence, he points out, is not a faculty independent of the rest of human life, and he identifies three important characteristics of human thought that are conspicuously absent from existing or proposed programs: (1) human thought is part of the cumulative process of the growth of the human organism, to which it contributes and on which it feeds; (2) it is inextricably bound up with feelings and emotions; and (3) unlike the computer’s behavior, which is single-minded to the exclusion of everything but the problem assigned it, human activity serves many motives at once, even when a person thinks he is concentrating on a single thing. Recent research by George A. Miller of Harvard, Eugene Galanter of Pennsylvania, and Karl Pribram of Stanford suggests that human behavior is much more “hierarchical” and intricately motivated than hitherto assumed, and Neisser thinks that this multiplicity of motives is not a “supplementary heuristic that can be readily incorporated into a problem-solving program.”

It is man’s complex emotional and other drives, in other words, that give his intelligence depth, breadth, and humanity; nobody has yet found a way of programing them into a computer, and Neisser doubts that anybody soon will. He predicts, however, that programing will become vastly more difficult as the machine is used more and more in solving “human” problems. Pattern recognition, learning, and memory will still be research goals, but a harder job will be to inject a measure of human motivation into the machine.

Some feel Neisser errs in suggesting that anybody will want to imitate man’s way of thinking in all its complexities. “The computer can and will be programed to do and be a lot of things,” says one research worker, “including acting just as foolishly as any human being.” In other words, nobody may want to stuff a machine full of the useless mental impedimenta lugged around by humans; the great merit of the machine is that it can think accurately and single-mindedly, untainted by irrelevant emotions and obscure and even immoral drives.

Nevertheless, the world is populated by human beings, and their motivations cannot be overlooked. For if the computer is asked to solve a problem in which human motivation is important it will have to be told exactly what that motivation is, and what to do about it and under what circumstances. That will not be easy.

Meanwhile the prospect for instructing the computer to behave like a real human is not bright; and this is precisely why some fear that the machine’s role as decision maker will be abused. “If machines really thought as men do,” Neisser explains, “there would be no more reason to fear them than to fear men. But computer intelligence is not human, it does not grow, has no emotional basis, and is shallowly motivated. These defects do not matter in technical applications, where the criteria for successful problem solving are relatively simple. They become extremely important if the computer is used to make social, business, economic, military, moral, and government decisions, for there our criteria of adequacy are as subtle and as multiply motivated as human thinking itself.” A human decision maker, he points out, is supposed to do the right thing even in unexpected circumstances, but the computer can be counted on only to deal with the situation anticipated by the programer.

A neat, clean, consistent judgment

In a recent issue of Science, David L. Johnson of the University of Washington and Arthur L. Kobler, a Seattle psychologist, plowed through the subject of misusing the computer. The use of the computer, they concede, inevitably will increase. But it is being called on to act in areas where man cannot define his own ability. There is a tendency to let the machine treat special problems as if they were routine calculations; for example, it may be used to plot the route for a new highway by a routine computation of physical factors. But the computation may overlook the importance of locating the highway where it will not create or compound ugliness.

Johnson and Kobler also feel that the “current tendency of men to give up individual identity and escape from responsibility” is enhanced by the computer. It takes man’s inputs and turns out a neat, clean, consistent judgment without “obsessive hesitation,” commitments, or emotional involvements. In effect, it assumes responsibility; and its neatness and decisiveness can lead men to skip value judgments, to accept unimaginative and partial results as accurate solutions, and to read into its results the ability to solve all problems. Even scientists who are aware of the limitations of machines, the authors reason, can find them so useful in solving narrow and well-defined problems that they may tend to assume the computer can solve all problems. Thus the danger of oversimplifying complex decisions, a danger that has always existed, becomes worse. Another worry is that military computer systems will react so swiftly that the people who nominally make the judgments will not have time to make them. “The need for caution,” Johnson and Kobler conclude, “will be greater in the future. Until we can determine more perfectly what we want from the machines, let us not call on mechanized decision systems to act upon human systems without realistic human processing. As we proceed with the inevitable development of computers and artificial intelligence, let us be sure we know what we are doing.”

These warnings manifestly add up to a one-sided case. They are drawn not from experience, but from supposition and extrapolation. If they do not quite set up a straw man, they do assume that men will take the worst possible course. Some computer people indeed complain wryly that Johnson and Kobler assume computer owners are vastly stupid, or possess a will to self-destruction. But the warnings do make a valid point, and only fools will not be given pause by them.

The very fact that the warnings have been given is evidence that men, on the whole, are not likely to overlook their message. Military and computer experts are already studying the problems raised by the speed of the machines. And to use the warnings to deny the real value of computers would be as foolish as misusing computers. The machines compel men to formulate their problems so much more intelligently and more thoroughly than they ever have that men can hardly be unaware of the shortcomings of their programs. The great majority of computers, as Johnson and Kobler are well aware, are being employed by business. Granted that U.S. business makes mistakes, granted that it has made and will make mistakes with computers, it does not operate in a monopolistic vacuum. Nothing would make a company more vulnerable to smart competitors than to abdicate responsibility to the neat, clean, consistent judgments of a machine.

The computer is here to stay; it cannot be shelved any more than the telescope or the steam engine could have been shelved. Taking everything together, man has a stupendous thing working for him, and one is not being egregiously optimistic to suggest he will make the most of it. Precisely because man is so arduously trying to imitate the behavior of human beings in the computer, he is bound to improve enormously his understanding of both himself and the machine.

*Many researchers shudder at the phrase “artificial intelligence.” Its anthropomorphic overtones, they say, often arouse irrelevant emotional responses—i.e., in people who think it sacrilegious to try to imitate the brain.