“Conversation is the new computer interface,” says the founder of a 3-month-old AI startup, already valued at $4 billion and backed by Microsoft, Nvidia and billionaires Reid Hoffman, Bill Gates and Eric Schmidt. Inflection’s Mustafa Suleyman and the other entrepreneurs and investors riding the “tidal wave” of chatbots, generative AI and deep learning stand on the shoulders of giants who have chatted with machines for the last seven decades.
This is a story about one of these giants, Yehoshua Bar-Hillel, and the taming of computers.
Bar-Hillel was an Israeli philosopher, mathematician and linguist and a pioneer in the field of machine translation. After a post-doc with Rudolf Carnap at the University of Chicago in 1950, Bar-Hillel moved to MIT, where he organized the first international Conference on Mechanical Translation in 1952.
His practical experience convinced Bar-Hillel that machine translation could work only in situations that are of “a low degree of complexity or are artificially arranged to be so,” where “the rigid mechanical brain can exhibit superiority over the flexible human brain.”
Generalizing further from the impossibility of high-quality machine translation to the possibility of computers matching or surpassing human intelligence, Bar-Hillel wrote and talked about “the fallacy of the first step.” The distance from being unable to do something to doing it badly is usually much shorter than the distance from doing it badly to doing it correctly. Many people think, argued Bar-Hillel, that if someone demonstrates a computer doing something that until very recently no one thought it could do, even if it does it badly, then only some further technological development is needed before it performs flawlessly. You only need to be patient, so goes the widespread assertion, and eventually it will get there. But reality proves otherwise, time and time again, cautioned Bar-Hillel.
A dozen years after his machine translation disappointment, at the First Annual Cybernetics Symposium of the American Society for Cybernetics (held on 26–27 October 1967, with the theme “Purposive Systems: The Edge of Knowledge”), Bar-Hillel was preparing to talk about AI fallacies and “The Future of Man-Machine Language Systems.” MIT’s Seymour Papert, however, pre-empted Bar-Hillel the day before by telling two stories.
The first story was about child psychologist and educator Édouard Claparède, who was Jean Piaget’s predecessor at the University of Geneva—Papert was one of Piaget’s protégés. Claparède, according to Papert, advised newly-minted teachers to spend three months in a circus before starting their teaching careers. Why? Because in the circus, if you try to teach the lions a certain trick and you don’t succeed, you can blame only yourself (and, I guess, suffer the consequences).
What Papert wanted the audience to learn from this story is that the computer can be taught any trick possible, and if someone does not succeed within a year or two in writing a program for a certain task (hint: Bar-Hillel), then they should blame only themselves (hint: and not the computer).
The second story was about MIT’s Richard Greenblatt, who managed to write a computer chess program that achieved the level of a talented chess player with at least two years’ experience, so Papert claimed. This was achieved, explained Papert, because Greenblatt abandoned the old idea of letting the computer learn from its mistakes and instead proceeded to tame it. (Note that machine learning was already an “old idea” in 1967, at least at MIT’s Architecture Machine Group, which later became the MIT Media Lab, where Papert collaborated with AI pioneer Marvin Minsky.) As Bar-Hillel later observed, Papert did not explain what the taming of the computer consisted of, or how taming differs from learning.
In his talk the next day, Bar-Hillel explained the fallacy of the first step with the example that if a computer plays well for the first six or seven chess moves, one cannot deduce that it will know how to play well for the rest of the game. He hastened to add to his prepared remarks, however, that based on what he had just heard from Papert, it looked like he had been proven wrong. Bar-Hillel admitted he was sad to learn of his blunder but happy to hear that a computer had surpassed his previous assessment.
Disappointed at his own failed estimation of the capabilities of computers, Bar-Hillel later went to MIT and got permission to play chess against Greenblatt’s program. The computer did very well, making all its moves “by the book.” At the 10th move, Bar-Hillel, fed up with how the game was going, deliberately chose a move that chess experts would not recommend as the best in that position. The computer “thought” hard for about a minute and a half (as opposed to previously making its moves within a second or so) and then made such a spectacularly wrong move that it was very clear Bar-Hillel would win the game. It turned out that the best chess program in the world at the time had some serious defects and weaknesses.
What Bar-Hillel concluded was that the taming of the computer, at least in this case, was all bunk and hogwash. Someone was tamed, but it was Greenblatt, not the computer. The failures of the machine served to train him to write better programs. The cooperation between man and machine is important and it works when programmers learn from their mistakes and improve their programs. But don’t call it “the taming of the shrewish machine,” Bar-Hillel implored AI adherents.
At the Cybernetics conference, Bar-Hillel started his talk with a story about a student of Claparède who came to MIT and asked permission, before he started teaching, to spend three months in a circus. He was allowed to do it and began teaching chess to the lions. Towards the end of the third month, he started working in the evenings as well, but the lions still made no progress in chess. As he had learned from his master, he blamed only himself and asked for a three-month extension, in which he decided to tackle an even more difficult task—to teach the lions to conduct an intelligent conversation. He (and the lions) made very little progress, and when the last night of the third month arrived, he asked permission to stay all night in the lions’ cage.
He was never heard from again.
The lesson, which applies also to computers, is that it is ridiculous to try to make lions do something they simply cannot do because of their innate abilities (limited compared to humans). As far as we know, the ability to converse intelligently is exclusively human, argued Bar-Hillel. Unfortunately, we don’t know what it is that enables us to do it. If we knew, surmised Bar-Hillel, it is possible that we could, at a significant cost, build a computer or develop a program that could react intelligently in conversation with us.
The outstanding gains made by computers have created many illusions and delusions, then and now. Bar-Hillel believed that their great achievements with computers had simply driven these very smart people insane. They developed, he said, the “conversational mode,” in which you converse with the computer line by line. This required special programming, and they did some impressive things with it. But they never succeeded in establishing a real foundation for their dreams. The time has not yet come, said Bar-Hillel in the late 1960s, for fulfilling their expectations of an intelligent conversation with computers in “natural language.”
To make progress, concluded Bar-Hillel, one needs to significantly reduce one’s ambitions.
The entrepreneurs and investors insisting today that they are on the brink of creating artificial general intelligence, or AGI, ignore Bar-Hillel’s warnings about the fallacy of the first step and about impossible-to-achieve ambitions. Isn’t it better to reduce one’s ambitions (hallucinations?) and focus on achieving a better understanding of human language and communications and, as a result, error-free human-computer conversations? Bar-Hillel thought this would be a “serious intellectual achievement.”
In 1968, Stewart Brand opened the first Whole Earth Catalog with the statement “We are as gods and might as well get used to it.” This sentiment, this hubris, has been driving the pursuit of AGI by entrepreneurs, researchers and investors. They have been ignoring, to the detriment of the ultimate success of their endeavors, Bar-Hillel’s contention that there are many things “between heaven and earth” that we know very little about.
In 1970, Terry Winograd completed his PhD dissertation at MIT, succeeding in moving natural language processing (NLP) forward by reducing ambitions, i.e., creating a closed, made-up world for interacting with a computer. More on today’s chatbots, Bar-Hillel, and Winograd in the following posts.
Lessons Learned From Computer Conversations And Taming AI 70 Years Ago - Forbes