  • Artificial Intelligence
    » About Us
    Artificial Intelligence
    مهسا میثاقی
    I'm a fifth-semester computer student. I'd be happy to help if I can.

    »» Chinese room »» date: 86/7/29 «» 10:47

    Chinese room

    The Chinese Room argument is a thought experiment, and an associated family of arguments, devised by John Searle (1980 [1]) as a counterargument to the claims of supporters of what Searle called strong artificial intelligence (see also functionalism).

    The argument is that a computer cannot have understanding, because a human being who runs a computer program by hand does not thereby acquire understanding. The argument is taken very seriously in philosophy, but is regarded as invalid by many scientists, including those outside the field of AI.

    Searle laid out the Chinese Room argument in his paper "Minds, Brains, and Programs," published in 1980. Since then, it has been a recurring trope in the debate over whether computers can truly think and understand. Searle argues as follows:

    Suppose that, many years from now, we have constructed a computer that behaves as if it understands Chinese. In other words, the computer takes Chinese characters as input and, following a set of rules (as all computers can be described as doing), produces other Chinese characters, which it presents as output. Suppose that this computer performs the task so convincingly that it easily passes the Turing test: it convinces a human Chinese speaker that the program is itself a human Chinese speaker. All the questions the human asks are answered appropriately, so that the Chinese speaker is convinced that he or she is talking to another Chinese-speaking human. The conclusion that proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does.

    Now, Searle asks us to suppose that he is sitting inside the computer. In other words, he is in an enormous room in which he receives Chinese characters, consults a rule book, and processes the Chinese characters according to the rules. Searle notes that he doesn't, of course, understand a word of Chinese. He simply manipulates what to him are meaningless squiggles, using the rules and whatever other equipment is provided in the room, such as paper, pencils, erasers, and millions of meticulously cross-referenced filing cabinets.

    After countless eons of manipulating symbols in this way, Searle produces the answer in Chinese. During all this time, he has never learned Chinese. So, Searle argues, his lack of understanding shows that computers don't understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is, and they don't understand what they're "saying", just as he doesn't.
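
    To make the purely mechanical character of this process concrete, here is a minimal sketch in Python. The rule book below is a three-entry lookup table invented for illustration (a real one would fill Searle's millions of filing cabinets); the point is that every reply is produced by shape-matching alone, with no access to meaning.

    # A toy Chinese Room: replies come from a lookup table of invented
    # rules, not from any understanding of the characters involved.
    RULE_BOOK = {
        "你好吗": "我很好",  # hypothetical rule: on seeing this shape, write that shape
        "你是谁": "我是人",
        "再见": "再见",
    }

    def chinese_room(message: str) -> str:
        # The operator matches the incoming squiggles against the rule
        # book and copies out whatever the matching rule dictates; an
        # unmatched input gets a fixed fallback squiggle, also by rule.
        return RULE_BOOK.get(message, "请再说一遍")

    print(chinese_room("你好吗"))  # prints 我很好, yet nothing here knows Chinese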

     History

    In 1980, John Searle published "Minds, Brains and Programs" in the journal Behavioral and Brain Sciences. In this article, Searle sets out the argument and then replies to the half-dozen main objections that had been raised during his presentations at various university campuses (see next section). In addition, Searle's article in BBS was published along with comments and criticisms by 27 cognitive science researchers. These 27 comments were followed by Searle's replies to his critics.

    Over the last two decades of the 20th century, the Chinese Room argument was the subject of many discussions. In 1984, Searle presented the Chinese Room argument in a book, Minds, Brains and Science. In January 1990, Scientific American took the debate to a general scientific audience. Searle included the Chinese Room argument in his contribution, "Is the Brain's Mind a Computer Program?", which was followed by a responding article, "Could a Machine Think?", written by Paul and Patricia Churchland. Soon thereafter Searle had a published exchange about the Chinese Room with another leading philosopher, Jerry Fodor (in Rosenthal (ed.) 1991).

    The heart of the argument is an imagined human simulation of a computer, similar to Turing's Paper Machine [2]. The human in the Chinese Room follows English instructions for manipulating Chinese characters, just as a computer "follows" a program written in a programming language. The human produces the appearance of understanding Chinese by following the symbol-manipulating instructions, but does not thereby come to understand Chinese. Since a computer does just what the human does, manipulating symbols on the basis of their syntax alone, no computer, merely by following a program, comes to genuinely understand Chinese.

    This argument, based closely on the Chinese Room scenario, is directed at a position Searle calls "Strong AI". Strong AI is the view that suitably programmed computers (or the programs themselves) can understand natural language and actually have other mental capabilities similar to the humans whose abilities they mimic. According to Strong AI, a computer may play chess intelligently, make a clever move, or understand language. By contrast, "weak AI" is the view that computers are merely useful in psychology, linguistics, and other areas, in part because they can simulate mental abilities. But weak AI makes no claim that computers can actually understand or be intelligent. The Chinese Room argument is not directed at weak AI, nor does it purport to show that machines cannot think — Searle says that brains are machines, and brains think. It is directed at the view that formal computations on symbols can produce thought.

    We might summarize the narrow argument as a reductio ad absurdum against Strong AI as follows. Let L be a natural language, and let us say that a "program for L" is a program for conversing fluently in L. A computing system is any system, human or otherwise, that can run a program.

    1. If Strong AI is true, then there is a program for L such that if any computing system runs that program, that system thereby comes to understand L.
    2. I could run a program for L without thereby coming to understand L.
    3. Therefore Strong AI is false.
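
    Schematically, this is a modus tollens. One way to render it in first-order notation (the predicate names Runs and Understands are shorthand introduced for this summary, not Searle's own notation):

    \[
    \begin{aligned}
    \text{(1)}\quad & \mathrm{StrongAI} \rightarrow \exists p\,\forall s\,\bigl(\mathrm{Runs}(s,p) \rightarrow \mathrm{Understands}(s,L)\bigr)\\
    \text{(2)}\quad & \forall p\,\exists s\,\bigl(\mathrm{Runs}(s,p) \wedge \neg\,\mathrm{Understands}(s,L)\bigr)\\
    \text{(3)}\quad & \therefore\ \neg\,\mathrm{StrongAI}
    \end{aligned}
    \]

    Premise (2) is witnessed by Searle himself in the room: for any candidate program, he could run it without coming to understand L. Since (2) is the negation of the consequent of (1), Strong AI is false.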

    The second premise is supported by the Chinese Room thought experiment. The conclusion of this argument is that running a program cannot create understanding. The wider argument includes the claim that the thought experiment shows more generally that one cannot get semantics (meaning) from syntax (formal symbol manipulation).

    The core of Searle's argument is the distinction between syntax and semantics. The room is able to shuffle characters according to the rule book. That is, the room's behaviour can be described as following syntactical rules. But in Searle's account it does not know the meaning of what it has done; that is, it has no semantic content. The characters do not even count as symbols because they are not interpreted at any stage of the process.

    مهسا میثاقی

    »» Post Titles
    machine learning
    Chinese room
    Turing test
    Artificial Intelligence