Morton Wagman elucidated the Unified Theory of Artificial Intelligence. Heinrich Hertz thought that complex algorithms are self-aware, sentient creatures. Alan Turing laid the theoretical groundwork for the modern computer and was among the first to propose theories of artificial intelligence and artificial life. Richard Wallace invented the programming language AIML and proposed that adaptive language and case-based reasoning were one possible key to unlocking sentience in software-based AI bots. Artificial Intelligence (AI) technology provides techniques for developing computer programs that carry out a variety of tasks, simulating the intelligent way humans solve problems. The problems that humans solve in their day-to-day lives span a wide variety of domains. Though the domains and methods differ, AI technology provides a set of formalisms for representing these problems, along with techniques for solving them. Beyond such descriptions, the term artificial intelligence is very difficult to define precisely. A chat bot is a chatting robot that can understand what you are saying, analyze it, and give you a suitable response; chat bots are considered a serious branch of AI. In 1966, Eliza, the first chat bot, was created by Joseph Weizenbaum.

Dr. Richard S. Wallace is the Chairman and CEO of the ALICE A.I. Foundation, Inc., a non-profit organization devoted to the development and adoption of the free AIML software for creating chat robots like ALICE. Dr. Wallace is a three-time winner of the prestigious Loebner Prize for Most Human Computer, based on the famous Turing Test for artificial intelligence, and the recipient of numerous other awards. His work has appeared in numerous international newspapers, magazines, television and radio broadcasts, and all over the web. He is the author of two books, Be Your Own Botmaster and The Elements of AIML Style. You can support the development of AIML by joining the A.I. Foundation at http://www.alicebot.org/join.html.

Hollywood is the greatest advertisement for A.I. and robotics in history. The problem is with academic scientists and engineers not living up to the public's expectations. A system like ALICE, which has won the award for coming closest to passing the Turing Test, could never be built in a university research lab. The pressure to be politically correct and to confine one's research to areas approved by the establishment, not to mention the scale in years and manpower, would prohibit any kind of believable A.I. from emerging from a university, or any government-funded research lab. This is one reason we see more advanced developments in hardware robotics emerging from Japan, which is not to say that Japanese scientists are free from political pressures. But the kinds they face are different from those in the U.S., which tend to stifle innovation and creativity. ALICE, by the way, has been used to help promote several Hollywood movies and TV shows on their web sites, including ABC's Alias, Lynn Hershman's Teknolust, and the Steven Spielberg-Stanley Kubrick A.I. movie.

The two greatest advancements in A.I. over the last 15 years had nothing to do with A.I. The first was open source. We have in place a system of building and sharing knowledge, of taking advantage of an economy of scale provided by a large labor force of volunteers, to accomplish something that could once only be done by a large corporation or government research lab.
The ALICE AI is a free software project, much like Linux or the Apache web server. Over the years, we have had contributions from hundreds of developers from all over the world. I have often said, how else could a person such as myself, with barely any working capital, create a software system that has captured, as some say, 80% of the market for chat robots, without the magic of open source? But even more significantly, I believe that free software stands in the same historical relation to the proprietary approach as the University once stood to the Church. We are living through a kind of Reformation, where the very mode of knowledge acquisition and dissemination is changing before our eyes. We can even see parallels to the Counter-Reformation, as when Microsoft agrees to open up parts of its code to its most trusted customers. But I am getting off the topic.

The other great advancement was the internet. Before the web came along, we never really had a chance to connect a natural language bot to a web page and collect a large corpus of data from people trying to have conversations with the machine. This was really important, because it told us what people thought the bot should be able to say. There is a statistical fact of language, called Zipf's Law, that people tend to repeat themselves, or repeat what they hear other people say, over and over again, much more often than they say original things. So with the web, we could measure the frequency of things people say. The problem of building a believable bot was reduced to attacking those things in order, starting with the most frequent and working our way down the list. By the time we covered the top 45,000 or so things people might say, we had built a fairly conversational bot.

One of the biggest obstacles to human acceptance of chat robots is suspension of disbelief. A child can have more fun with a bot than an adult, because the kid will forgive the bot when it breaks down and gives an incorrect answer. Adults, especially highly educated ones, tend to be more critical of the bot's mistakes. There is actually a tension between the part of us that wants bots to be super-intelligent machines, always accurate, truthful, and precise, and the part of us that wants robots to be more human, which means something like the opposite: sloppy, lying, funny, hypnotic, charismatic, and maybe sometimes truthful and accurate. Robots might be telling us to get over ourselves.

I'm going to sum up those last two questions in one by saying that no technology has ever been either entirely positive or entirely negative. Moreover, technology has a kind of determinism, or at least a natural course of evolution, that appears to skip over the minds of individual inventors, despite their egos and individual passions. So I don't think you could do much to help or hurt the advancement of anything by manipulating public perception, not for very long anyway. Chat bots have been applied to entertainment, teaching English as a Second Language, and selling cars. Under a darker scenario, one can easily imagine them being used as interrogation machines. The problem is not with A.I. or any other technology, but with our own human brains. No one has proved that the human brain is smart enough to solve all of the problems it has created.
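Wallace's point about Zipf's Law above suggests a simple workflow for bot authors: log what people actually say to the bot, rank it by frequency, and write responses for the most common inputs first. The Python sketch below is a minimal, hypothetical illustration of that ranking step; it is not ALICE's actual tooling, and the sample log is invented.

    from collections import Counter

    def rank_utterances(lines):
        """Normalize logged utterances and return them ordered from most to least frequent."""
        counts = Counter(line.strip().lower() for line in lines if line.strip())
        return counts.most_common()

    if __name__ == "__main__":
        # Hypothetical sample of logged inputs; a real corpus would run to millions of lines.
        log = [
            "hello", "Hello", "what is your name", "hello",
            "do you like music", "what is your name", "hello",
        ]
        for utterance, frequency in rank_utterances(log):
            print(f"{frequency:5d}  {utterance}")

Running this on a real chat log would surface the small set of utterances that account for most traffic, which is exactly the "top of the list" Wallace describes attacking first.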
Artificial intelligence (AI) is a branch of computer science that deals with intelligent behavior, learning, and adaptation in machines. Research in AI is concerned with producing machines to automate tasks requiring intelligent behavior. Examples include control, planning and scheduling, the ability to answer diagnostic and consumer questions, and handwriting, speech, and facial recognition. As such, it has become an engineering discipline, focused on providing solutions to real-life problems. AI systems are now in routine use in economics, medicine, engineering and the military, as well as being built into many common home computer software applications, traditional strategy games like computer chess, and other video games.

Early in the 17th century, René Descartes envisioned the bodies of animals as complex but reducible machines, thus formulating the mechanistic theory, also known as the "clockwork paradigm". Wilhelm Schickard created the first mechanical digital calculating machine in 1623, followed by machines of Blaise Pascal (1643) and Gottfried Wilhelm von Leibniz (1671), who also invented the binary system. In the 19th century, Charles Babbage and Ada Lovelace worked on programmable mechanical calculating machines. Bertrand Russell and Alfred North Whitehead published Principia Mathematica in 1910-1913, which revolutionized formal logic. In 1931 Kurt Gödel showed that sufficiently powerful consistent formal systems contain true theorems unprovable by any theorem-proving AI that systematically derives all possible theorems from the axioms. Since humans are able to "see" the truth of such theorems, AIs were deemed inferior. In 1941 Konrad Zuse built the first working program-controlled computers. Warren McCulloch and Walter Pitts published A Logical Calculus of the Ideas Immanent in Nervous Activity (1943), laying the foundations for neural networks. Norbert Wiener's Cybernetics, or Control and Communication in the Animal and the Machine (MIT Press, 1948) popularized the term "cybernetics".

The 1950s were a period of active effort in AI. In 1950, Alan Turing introduced the "Turing test" as a way of operationalizing a test of intelligent behavior. The first working AI programs were written in 1951 to run on the Ferranti Mark I machine of the University of Manchester: a draughts-playing program written by Christopher Strachey and a chess-playing program written by Dietrich Prinz. John McCarthy coined the term "artificial intelligence" at the first conference devoted to the subject, in 1956. He also invented the Lisp programming language. Joseph Weizenbaum built ELIZA, a chatterbot implementing Rogerian psychotherapy. At the same time, John von Neumann, who had been hired by the RAND Corporation, developed game theory, which would prove invaluable in the progress of AI research.

During the 1960s and 1970s, Joel Moses demonstrated the power of symbolic reasoning for integration problems in the Macsyma program, the first successful knowledge-based program in mathematics. Leonard Uhr and Charles Vossler published "A Pattern Recognition Program That Generates, Evaluates, and Adjusts Its Own Operators" in 1963, which described one of the first machine learning programs that could adaptively acquire and modify features and thereby overcome the limitations of Rosenblatt's simple perceptrons. Marvin Minsky and Seymour Papert published Perceptrons, which demonstrated the limits of simple neural nets. Alain Colmerauer developed the Prolog computer language.
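The limit Minsky and Papert identified is concrete: a single-layer perceptron can only separate classes with a straight line, so it can learn AND but not XOR. The Python toy below is an illustrative modern sketch, not code from the period; it trains the same perceptron rule on both functions to make the point.

    def train_perceptron(samples, epochs=100, lr=0.1):
        """Classic perceptron learning rule on two inputs with a bias term."""
        w0 = w1 = b = 0.0
        for _ in range(epochs):
            for (x0, x1), target in samples:
                prediction = 1 if w0 * x0 + w1 * x1 + b > 0 else 0
                error = target - prediction
                w0 += lr * error * x0
                w1 += lr * error * x1
                b += lr * error
        return w0, w1, b

    def accuracy(w0, w1, b, samples):
        hits = sum((1 if w0 * x0 + w1 * x1 + b > 0 else 0) == t for (x0, x1), t in samples)
        return hits / len(samples)

    and_gate = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]   # linearly separable
    xor_gate = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]   # not linearly separable

    for name, data in [("AND", and_gate), ("XOR", xor_gate)]:
        w0, w1, b = train_perceptron(data)
        # AND reaches 1.0; XOR never exceeds 0.75 no matter how long we train.
        print(name, "accuracy:", accuracy(w0, w1, b, data))

Overcoming this limitation required either engineered features (as in Uhr and Vossler's program) or, later, multi-layer networks trained with backpropagation.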
Ted Shortliffe demonstrated the power of rule-based systems for knowledge representation and inference in medical diagnosis and therapy in what is sometimes called the first expert system. Hans Moravec developed the first computer-controlled vehicle to autonomously negotiate cluttered obstacle courses. In the 1980s, neural networks became widely used due to the backpropagation algorithm, first described by Paul Werbos in 1974. The team of Ernst Dickmanns built the first robot cars, driving up to 55 mph on empty streets.

The 1990s marked major achievements in many areas of AI and demonstrations of various applications. In 1995, one of Dickmanns' robot cars drove more than 1000 miles in traffic at up to 120 mph. Deep Blue, a chess-playing computer, beat Garry Kasparov in a famous six-game match in 1997. DARPA stated that the costs saved by implementing AI methods for scheduling units in the first Persian Gulf War have repaid the US government's entire investment in AI research since the 1950s. Honda built the first prototypes of humanoid robots. During the 1990s and 2000s AI became heavily influenced by probability theory and statistics. Bayesian networks are the focus of this movement, providing links to more rigorous topics in statistics and engineering such as Markov models and Kalman filters, and bridging the divide between `neat' and `scruffy' approaches. The last few years have also seen growing interest in game theory applied to AI decision making. This new school of AI is sometimes called `machine learning'. After the September 11, 2001 attacks there has been much renewed interest and funding for threat-detection AI systems, including machine vision research and data mining. The DARPA Grand Challenge is a race for a $2 million prize in which cars drive themselves across challenging desert terrain without any communication with humans, using GPS, computers and a sophisticated array of sensors. In 2005 the winning vehicles completed all 132 miles of the course. In the post-dot-com era, websites such as 'Ask Jeeves' and 'Ask Cheggers.com' have sprung up that use a simple form of AI to provide answers to questions by searching the internet.

The Turing test suggests that a sufficient condition for intelligence is the ability to converse with a human in such a way that the human is fooled into thinking the conversation is with another human. In order to remove biases based on how the AI looks, the conversation is normally imagined to take place through a medium like modern-day instant messaging chats. Such a test is not a necessary condition; it seems, for example, that E.T. was intelligent even if it couldn't convince anyone of this fact due to language barriers and the like. Others doubt that it is even a sufficient condition. Chatbots, for example, use increasingly sophisticated algorithms for sounding intelligent without any actual understanding of the conversations. John Searle argues that AI is impossible in his famous thought experiment, the Chinese room. Searle argues that syntax is not sufficient for semantics: mere symbol manipulation, no matter how complicated, cannot provide genuine meaning or understanding. Most professional philosophers in the area believe that Searle failed to establish that AI is impossible, but there is disagreement about exactly what is wrong with his argument.
A major influence in the AI ethics dialogue was Isaac Asimov, who created the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. Ultimately, a reading of his work concludes that no set of fixed laws can sufficiently anticipate the possible behavior of AI agents and human society. A criticism of Asimov's robot laws is that the installation of unalterable laws into a sentient consciousness would be a limitation of free will and therefore unethical. Consequently, Asimov's robot laws would be restricted to explicitly non-sentient machines, which possibly could not be made to reliably understand them under all possible circumstances.

The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with the utmost speciesism. The short story The Planck Dive suggests a future where humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction between types of software is whether it is sentient or non-sentient. The same idea can be found in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best of motives, has created the system to give medical assistance in case of emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of creating sentient computers. Over time, debates have tended to focus less and less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, actually seeks to build more intelligent successors to the human species.

AI methods are often employed in cognitive science research, which tries to model subsystems of human cognition. Historically, AI researchers aimed for the loftier goal of so-called strong AI, that of simulating complete, human-like intelligence. This goal is epitomised by the fictional strong AI computer HAL 9000 in the film 2001: A Space Odyssey. This goal is unlikely to be met in the near future and is no longer the subject of most serious AI research. The label "AI" has something of a bad name due to the failure of these early expectations, aggravated by various popular-science writers and media personalities such as Professor Kevin Warwick, whose work has raised the expectations of AI research far beyond its current capabilities. For this reason, many AI researchers say they work in cognitive science, informatics, statistical inference or information engineering. Recent research areas include Bayesian networks and artificial life. The vision of artificial intelligence replacing human professional judgment has arisen many times in the history of the field, and today "expert systems" are routinely used to augment or replace professional judgment in some areas of engineering and medicine.
As a broad subfield of artificial intelligence, machine learning is concerned with the development of algorithms and techniques that allow computers to "learn". At a general level, there are two types of learning: inductive and deductive. Inductive machine learning methods create computer programs by extracting rules and patterns out of massive data sets. Although pattern identification is important to machine learning, a process that extracts patterns without rules falls more accurately within the field of data mining. Machine learning overlaps heavily with statistics, since both fields study the analysis of data, but unlike statistics, machine learning is concerned with the algorithmic complexity of computational implementations. Many inference problems turn out to be NP-hard or harder, so part of machine learning research is the development of tractable approximate inference algorithms. Machine learning has a wide spectrum of applications including search engines, medical diagnosis, bioinformatics and cheminformatics, detecting credit card fraud, stock market analysis, classifying DNA sequences, speech and handwriting recognition, object recognition in computer vision, game playing and robot locomotion. Some machine learning systems attempt to eliminate the need for human intuition in the analysis of the data, while others adopt a collaborative approach between human and machine. Human intuition cannot be entirely eliminated, since the designer of the system must specify how the data are to be represented and what mechanisms will be used to search for a characterization of the data. Machine learning can be viewed as an attempt to automate parts of the scientific method. Some machine learning researchers create methods within the framework of Bayesian statistics.

Machine learning can be used for image recognition by processing parameters or features extracted from the data, so that each data element is represented by one number for each of the features. For example, images of fish might be processed with an algorithm that determines the length and the number of scales. This alone doesn't discriminate between trout and carp, but the two classes of fish have statistically different characteristics in these features. Then, depending on how well these features discriminate between the classes, a decision rule can be created which maximizes some criterion, like "most fish correctly classified" or "5% or fewer carp incorrectly classified". Machine learning also encompasses reinforcement learning.
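The trout/carp example can be made concrete with a decision stump: a rule that picks the single threshold on one feature that maximizes the number of correctly classified fish. The Python sketch below is a toy illustration with invented measurements, not a real classifier for any real data set.

    samples = [
        # (length_cm, species) -- made-up measurements purely for illustration
        (28, "trout"), (30, "trout"), (34, "trout"), (36, "trout"),
        (33, "carp"), (41, "carp"), (45, "carp"), (48, "carp"), (52, "carp"),
    ]

    def best_threshold(data):
        """Try every observed length as a cut point; call fish at or below it trout."""
        best_cut, best_correct = None, -1
        for cut in sorted({length for length, _ in data}):
            correct = sum((species == "trout") == (length <= cut) for length, species in data)
            if correct > best_correct:
                best_cut, best_correct = cut, correct
        return best_cut, best_correct

    cut, correct = best_threshold(samples)
    print(f"rule: trout if length <= {cut} cm  ({correct}/{len(samples)} fish correctly classified)")

Swapping the "most fish correct" criterion for "5% or fewer carp misclassified" would simply change the score being maximized inside the loop.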
Artificial consciousness (AC), also known as machine consciousness (MC) or synthetic consciousness, is a field related to artificial intelligence and cognitive robotics whose aim is to define that which would have to be synthesized were consciousness to be found in an engineered artifact. The idea of producing an artificial sentient being is ancient and is featured in numerous myths, the golem, the Greek Promethean myth, mechanical men in Chrétien de Troyes, and the creature in Mary Shelley's novel Frankenstein being examples. In science fiction, artificial conscious beings often take the form of robots or artificial intelligences. Artificial consciousness is an interesting philosophical problem because, with increased understanding of genetics, neuroscience and information processing, it may in the future become possible to create a conscious entity.

It may be possible biologically to create a being by manufacturing a genome that has the genes necessary for a human brain and injecting it into a suitable host germ cell. Such a creature, when implanted and born from a suitable womb, would very possibly be conscious and artificial. But what properties of this organism would be responsible for its consciousness? Could such a being be made from non-biological components? Can the techniques used in the design of computers be adapted to create a conscious entity? Would it ever be ethical to do such a thing?

Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness, or NCCs. The brain somehow avoids the problem described in the homunculus fallacy and overcomes the problems described in the next section. Proponents of AC believe computers can emulate this interoperation, which is not yet fully understood. According to naïve and direct realism, humans perceive directly while brains perform processing. According to indirect realism and dualism, brains contain data obtained by processing, but what people perceive is a mental model or state appearing to overlay physical things as a result of projective geometry (such as the point observation in René Descartes' dualism). Which of these approaches to consciousness is correct is fiercely debated. Direct perception problematically requires a new physical theory allowing conscious experience to supervene directly on the world outside the brain. But if people perceive indirectly through a world model in the brain, then a new physical phenomenon, other than the endless further flow of data, would be needed to explain how the model becomes experience. If people perceive directly, self-awareness is difficult to explain, because one of the principal reasons for proposing direct perception is to avoid Ryle's regress, where internal processing recurses infinitely. Self-awareness in robots is being investigated at Meiji University in Japan, which has developed a robot that can discriminate between its own image in a mirror and the image of another robot. Direct perception also demands that one cannot really be aware of dreams, imagination, mental images or any inner life, because these would involve recursion. Self-awareness is less problematic for entities that perceive indirectly because, by definition, they are perceiving their own state. However, as mentioned above, proponents of indirect perception must suggest some phenomenon, either physical or dualist, to prevent Ryle's regress. If people perceive indirectly, then self-awareness might result from the extension of experience in time described by Immanuel Kant, William James and Descartes. Unfortunately this extension in time may not be consistent with our current understanding of physics.

Information processing consists of encoding a state, such as the geometry of an image, on a carrier such as a stream of electrons, and then submitting this encoded state to a series of transformations specified by a set of instructions called a program. In principle the carrier could be anything, even steel balls or onions, and the machine that implements the instructions need not be electronic; it could be mechanical or fluidic. Digital computers implement information processing. From the earliest days of digital computers, people have suggested that these devices may one day be conscious. One of the earliest workers to consider this idea seriously was Alan Turing.
If technologists were limited to the principles of digital computing when creating a conscious entity, they would face the problems associated with the philosophy of strong AI. The most serious is John Searle's Chinese room argument, which holds that the contents of an information processor have no intrinsic meaning: at any moment they are just a set of electrons or steel balls. Searle's objection does not convince proponents of direct perception, because they would maintain that 'meaning' is only to be found in objects of perception. The objection is also countered by the concept of emergentism, which proposes that some unspecified new physical phenomenon arises from processor complexity. The misnomer digital sentience is sometimes used in the context of artificial intelligence research. Sentience means the ability to feel or perceive in the absence of thoughts, especially inner speech. It suggests conscious experience is a state rather than a process. The debate about whether a machine could be conscious under any circumstances is usually described as the conflict between physicalism and dualism. Dualists believe that there is something non-physical about consciousness, whilst physicalists hold that all things are physical.

There are various aspects of consciousness generally deemed necessary for a machine to be artificially conscious. A variety of functions in which consciousness plays a role were suggested by Bernard Baars. The aim of AC is to define whether and how these and other aspects of consciousness can be synthesized in an engineered artifact such as a digital computer. The aspects discussed below are not exhaustive; there are many others not covered. A generally accepted criterion for sentience and consciousness is self-awareness: one dictionary defines conscious to mean "having an awareness of one's environment and one's own existence, sensations, and thoughts" (dictionary.com). The 1913 Webster's Dictionary defines conscious as "possessing knowledge, whether by internal, conscious experience or by external observation; cognizant; aware; sensible". An AC system should be capable of achieving various aspects (or, on a stricter view, all verifiable, known, objective, and observable aspects) of consciousness. While self-awareness is very important, it may be subjective and is generally difficult to test.

The ability to predict (or anticipate) foreseeable events is considered important for AC by Igor Aleksander: he writes in Artificial Neuroconsciousness: An Update, "Prediction is one of the key functions of consciousness. An organism that cannot predict would have a seriously hampered consciousness." The emergentist multiple drafts principle proposed by Daniel Dennett in Consciousness Explained may be useful for prediction: it involves the evaluation and selection of the most appropriate "draft" to fit the current environment. Awareness could be another required aspect. However, again, there are some problems with the exact definition of awareness. To illustrate this point, philosopher David Chalmers controversially put forward the panpsychist argument that a thermostat could be considered conscious (Chalmers 1996, pp. 283-299): it has states corresponding to too hot, too cold, or the correct temperature. The results of neuroimaging experiments on monkeys suggest that a process, not a state or an object, activates neurons.
Such a reaction requires creating a model of the process based on the information received through the senses; creating models in this way demands a great deal of flexibility and is also useful for making predictions. Personality is another characteristic that is generally considered vital for a machine to appear conscious. In the area of behavioral psychology, there is a somewhat popular theory that personality is an illusion created by the brain in order to interact with other people. It is argued that without other people to interact with, humans (and possibly other animals) would have no need of personalities, and human personality would never have evolved. An artificially conscious machine may need to have a personality capable of expression such that human observers can interact with it in a meaningful way. However, this is often questioned by computer scientists; the Turing test, which measures a machine's personality, is no longer generally considered useful.

Learning is also considered necessary for AC. According to "Engineering consciousness", a summary by Ron Chrisley of the University of Sussex [4], consciousness is or involves self, transparency, learning (of dynamics), planning, heterophenomenology, splitting of the attentional signal, action selection, attention and timing management. Daniel Dennett said in his article "Consciousness in Human and Robot Minds": "It might be vastly easier to make an initially unconscious or nonconscious 'infant' robot and let it 'grow up' into consciousness, more or less the way we all do." He explained that the robot Cog, described there, "will not be an adult at first, in spite of its adult size. It is being designed to pass through an extended period of artificial infancy, during which it will have to learn from experience, experience it will gain in the rough-and-tumble environment of the real world." And: "Nobody doubts that any agent capable of interacting intelligently with a human being on human terms must have access to literally millions if not billions of logically independent items of world knowledge. Either these must be hand-coded individually by human programmers--a tactic being pursued, notoriously, by Douglas Lenat and his CYC team in Dallas--or some way must be found for the artificial agent to learn its world knowledge from (real) interactions with the (real) world." An interesting article about learning is "Implicit learning and consciousness" by Axel Cleeremans of the University of Brussels and Luis Jiménez of the University of Santiago, where learning is defined as "a set of phylogenetically advanced adaptation processes that critically depend on an evolved sensitivity to subjective experience so as to enable agents to afford flexible control over their actions in complex, unpredictable environments".

Anticipation is a characteristic that could possibly be used to make a machine appear conscious. An artificially conscious machine should be able to anticipate events correctly in order to be ready to respond to them when they occur. The implication here is that the machine needs real-time components, making it possible to demonstrate that it possesses artificial consciousness in the present and not just in the past. In order to do this, the machine being tested must operate coherently in an unpredictable environment, to simulate the real world. Generality is also needed: John McCarthy has said that "it was obvious in 1971 and even in 1958 that AI programs suffered from a lack of generality". This characteristic is important for AI, and even more so for AC.
There are several commonly stated views regarding the plausibility and capability of AC, and the likelihood that AC will ever be real consciousness. Some say the thermostat really is conscious, but they do not claim the thermostat is capable of an appreciation of music. In an interview Chalmers called his statement that a thermostat is conscious "very speculative", and he is not a keen proponent of panpsychism (see "Whither panpsychism?" on page 298 of Chalmers 1996). Interpretations like that are possible because of deliberately loose definitions, but tend to be too restrictive to have any significant intellectual value. Artificial consciousness need not be as genuine as strong AI; it need only be as objective as the scientific method demands and capable of achieving the known, objectively observable abilities of consciousness, except subjective experience, which according to Thomas Nagel cannot be objectively observed. It is impossible to test whether anything is conscious. To ask a thermostat to appreciate music is like asking a human to think in five dimensions: it is as unnecessary for humans to think in five dimensions as it is irrelevant for thermostats to understand music. Consciousness is just a word attributed to things that appear to make their own choices, and perhaps to things that are too complex for our minds to comprehend. Things seem to be conscious, but that may just be because our moral sense tells us to believe it, or because of our feelings for other things. On this view, consciousness is an illusion.

One alternative view states that it is possible for a human to deny his or her own existence and thereby, presumably, his or her own consciousness. That a machine might cogently discuss Descartes' argument "I think, therefore I am" would be some evidence in favor of the machine's consciousness. However, if it discussed the proposition only as a symbolic argument, it would be all too human. The original proposition was an affirmation that conscious experience simply exists; we cannot deny it, because the denial is part of conscious experience. A conscious machine could even argue that because it is a machine it cannot be conscious, just as a human being who had misunderstood the difference between symbolic argument and experience might argue. Consciousness does not imply unfailing logical ability. The richness or completeness of consciousness, degrees of consciousness, and many other related topics are under discussion, and will be for some time (possibly forever). That one entity's consciousness is less "advanced" than another's does not prevent each from considering its own consciousness rich and complete.

Today's computers are not generally considered conscious. A Unix (or derivative) computer's response to the wc -w command, reporting the number of words in a text file, is not a particularly compelling manifestation of consciousness. However, the response to the top command, in which the computer reports in real time and continuously which tasks it is or is not busy with, how much spare CPU power is available, and so on, is a particular if very limited manifestation of self-awareness and, if we define consciousness as behavioural evidence of self-awareness, this could indeed be called consciousness.
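As a toy analogue of the top example above, the short Python sketch below has a program report a few facts about its own running state. It uses only the standard library, and it is purely illustrative of behavioural self-report; it makes no claim about consciousness.

    import os
    import threading
    import time

    def self_report():
        """Gather a few facts about this very process, loosely in the spirit of top."""
        return {
            "pid": os.getpid(),
            "active_threads": threading.active_count(),
            "cpu_seconds_used": round(time.process_time(), 4),
            "wall_clock": time.strftime("%H:%M:%S"),
        }

    if __name__ == "__main__":
        for key, value in self_report().items():
            print(f"{key:18} {value}")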
The term "artificial consciousness" was used by several scientists including Professor Igor Aleksander, a faculty member at the Imperial College in London, England, who stated in his book Impossible Minds that the principles for creating a conscious machine already existed but that it would take forty years to train such a machine to understand language. Understanding a language does not mean understand the language you are using. Dogs may understand up to 200 words, but may not be able to demonstrate to everyone that they can do so. Digital sentience has so far been an elusive goal, and a vague and poorly understood one at that. Since the 1950s, computer scientists, mathematicians, philosophers, and science fiction authors have debated the meaning, possibilities and the question of what would constitute digital sentience. At this time analog holographic sentience modeled after humans is more likely to be a successful approach. AC research has moved beyond the realm of philosophy; several serious attempts are underway to instill consciousness in machines. Two of these are described below; others exist and more will undoubtedly follow. Stan Franklin (1995, 2003) defines an autonomous agent as possessing functional consciousness when it is capable of several of the functions of consciousness as identified by Bernard Baars' Global Workspace Theory (1988, 1997). His brain child IDA (Intelligent Distribution Agent) is a software implementation of GWT, which makes it functionally conscious by definition. IDA's task is to negotiate new assignments for sailors in the US Navy after they end a tour of duty, by matching each individual's skills and preferences with the Navy's needs. IDA interacts with Navy databases and communicates with the sailors via natural language email dialog while obeying a large set of Navy policies. The IDA computational model was developed during 1996-2001 at Stan Franklin's "Conscious" Software Research Group at the University of Memphis. It "consists of approximately a quarter-million lines of Java code, and almost completely consumes the resources of a 2001 high-end workstation". It relies heavily on codelets, which are "special purpose, relatively independent, mini-agent typically implemented as a small piece of code running as a separate thread". In IDA's top-down architecture, high-level cognitive functions are explicitly modeled; see Franklin (1995, 2003) for details. While IDA is functionally conscious by definition, Franklin does "not attribute phenomenal consciousness to [his] own 'conscious' software agent, IDA, in spite of her many human-like behaviours. This in spite of watching several US Navy detailers repeatedly nodding their heads saying 'Yes, that's how I do it' while watching IDA's internal and external actions as she performs her task". Pentti Haikonen (2003) considers classical rule-based computing inadequate for achieving AC: "the brain is definitely not a computer. Thinking is not an execution of programmed strings of commands. The brain is not a numerical calculator either. We do not think by numbers." Rather than trying to achieve mind and consciousness by identifying and implementing their underlying computational rules, Haikonen proposes "a special cognitive architecture to reproduce the processes of perception, inner imagery, inner speech, pain, pleasure, emotions and the cognitive functions behind these. 
This bottom-up architecture would produce higher-level functions by the power of the elementary processing units, the artificial neurons, without algorithms or programs". Haikonen believes that, when implemented with sufficient complexity, this architecture will develop consciousness, which he considers to be "a style and way of operation, characterized by distributed signal representation, perception process, cross-modality reporting and availability for retrospection." Haikonen is not alone in this process view of consciousness, or in the view that AC will spontaneously emerge in autonomous agents that have a suitable neuro-inspired architecture of sufficient complexity; these views are shared by many, e.g. Freeman (1999) and Cotterill (2003). A low-complexity implementation of the architecture proposed by Haikonen (2004) was reportedly not capable of AC, but did exhibit emotions as expected.

Unless artificial consciousness can be proven formally, judgments of the success of any implementation will depend on observation. The Turing test is a proposal for identifying machine intelligence as determined by a machine's ability to interact with a person: one has to guess whether the entity one is interacting with is a machine or a human. An artificially conscious entity could only pass an equivalent test when it had itself passed beyond the imaginations of observers and entered into a meaningful relationship with them, and perhaps with fellow instances of itself. A cat or dog would not be able to pass this test. It is highly likely that consciousness is not an exclusive property of humans. It is likely that a machine could be conscious and yet not be able to pass the Turing test. As mentioned above, the Chinese room argument attempts to debunk the validity of the Turing test by showing that a machine can pass the test and yet not be conscious. Since there is an enormous range of human behaviours, all of which are deemed conscious, it is difficult to lay down all the criteria by which to determine whether a machine manifests consciousness. Indeed, for those who argue for indirect perception, no test of behaviour can prove or disprove the existence of consciousness, because a conscious entity can have dreams and other features of an inner life. This point is made forcibly by those who stress the subjective nature of conscious experience, such as Thomas Nagel, who in his essay What Is It Like to Be a Bat? argues that subjective experience cannot be reduced because it cannot be objectively observed, though subjective experience is not in contradiction with physicalism. Although objective criteria are being proposed as prerequisites for testing the consciousness of a machine, the failure of any particular test would not disprove consciousness. Ultimately it will only be possible to assess whether a machine is conscious when a universally accepted understanding of consciousness is available.

Another test of AC, in the opinion of some, should include a demonstration that a machine can learn to filter out certain stimuli in its environment, to focus on certain stimuli, and to show attention toward its environment in general. The mechanisms that govern how human attention is driven are not yet fully understood. This absence of knowledge could be exploited by engineers of AC; since we don't understand attentiveness in humans, we have no specific, known criteria by which to measure it in machines.
Since unconsciousness in humans equates to total inattentiveness, an AC should have outputs that indicate where its attention is focused at any one time, at least during the aforementioned test. According to Antonio Chella of the University of Palermo, "The mapping between the conceptual and the linguistic areas gives the interpretation of linguistic symbols in terms of conceptual structures. It is achieved through a focus of attention mechanism implemented by means of suitable recurrent neural networks with internal states. A sequential attentive mechanism is hypothesized that suitably scans the conceptual representation and, according to the hypotheses generated on the basis of previous knowledge, it predicts and detects the interesting events occurring in the scene. Hence, starting from the incoming information, such a mechanism generates expectations and it makes contexts in which hypotheses may be verified and, if necessary, adjusted".

The latest computer designs draw inspiration from human neural networks. But will machines ever really think? How long does it take you to add 3,456,732 and 2,245,678? Ten seconds? Not bad--for a human. The average new PC can perform the calculation in 0.000000018 second. How about your memory? Can you remember a shopping list of 10 items? Maybe 20? Compare that with 125 million items for the PC. On the other hand, computers are stumped by faces, which people recognize instantly. Machines lack the creativity for novel ideas and have no feelings and no fond memories of their youth. But recent technological advances are narrowing the gap between human brains and circuitry. At Stanford University, bioengineers are replicating the complicated parallel processing of neural networks on microchips. Another development--a robot named Darwin VII--has a camera and a set of metal jaws so that it can interact with its environment and learn, the way juvenile animals do. Researchers at the Neurosciences Institute in La Jolla, Calif., modeled Darwin's brain on rat and ape brains. These developments raise a natural question: if computer processing eventually apes nature's neural networks, will cold silicon ever be truly able to think? And how will we judge whether it does?

More than 50 years ago British mathematician and philosopher Alan Turing invented an ingenious strategy to address this question, and the pursuit of this strategy has taught science a great deal about designing artificial intelligence, a field now known as AI. At the same time, it has shed some light on human cognition. So what, exactly, is this elusive capacity we call "thinking"? People often use the word to describe processes that involve consciousness, understanding and creativity. In contrast, current computers merely follow the instructions provided by their programming. In 1950, an era when silicon microchips did not yet exist, Turing realized that as computers got smarter, this question about artificial intelligence would eventually arise. In what is arguably the most famous philosophy paper ever written, "Computing Machinery and Intelligence," Turing simply replaced the question "Can machines think?" with "Can a machine--a computer--pass the imitation game?" That is, can a computer converse so naturally that it could fool a person into thinking that it was a human being?
Turing took his idea from a simple parlor game in which a person, called the interrogator, must determine, by asking a series of questions, whether an unseen person in another room is a man or a woman. In his thought experiment he replaced the person in the other room with a computer. To pass what is now called the Turing Test, the computer must answer any question from an interrogator with the linguistic competency and sophistication of a human being. Turing ended his seminal paper with the prediction that in 50 years' time--which is right about now--we would be able to build computers so good at playing the imitation game that an average interrogator would have only a 70 percent chance of correctly identifying whether he or she is speaking to a person or a machine. So far Turing's prediction has not come true: no computer can actually pass the Turing Test.

Why does something that comes so easily for people pose such hurdles for machines? To pass the test, computers would have to demonstrate not just one competency (in mathematics, say, or knowledge of fishing) but many of them--as many competencies as the average human being possesses. Yet computers have what is called a restricted design. Their programming enables them to accomplish a specific job, and they have a knowledge base that is relevant to that task alone. A good example is Anna, IKEA's online assistant. You can ask Anna about IKEA's products and services, but she will not be able to tell you about the weather.

What else would a computer need to pass the Turing Test? Clearly, it would have to have an excellent command of language, with all its quirks and oddities. Crucial to being sensitive to those quirks is taking account of the context in which things are said. But computers cannot easily recognize context. The word "bank," for instance, can mean "river bank" or "financial institution," depending on the context in which it is used. What makes context so important is that it supplies background knowledge. A relevant piece of such knowledge, for example, is who is asking the question: is it an adult or a child, an expert or a layperson? And for a query such as "Did the Yankees win the World Series?" the year in which the question is asked is important. Background knowledge, in fact, is useful in all kinds of ways, because it reduces the amount of computational power required. Logic is not enough to correctly answer questions such as "Where is Sue's nose when Sue is in her house?" One also needs to know that noses are generally attached to their owners. To tell the computer simply to respond with "in the house" is insufficient for such a query. The computer might then answer the question "Where is Sue's backpack when Sue is in her house?" with "in the house," when the appropriate response would be "I don't know." And just imagine how complicated matters would be if Sue had recently gotten a nose job. Here the correct answer would have been another question: "Which part of Sue's nose are you talking about?" Trying to write software that accounts for every possibility quickly leads to what computer scientists call "combinatorial explosion".
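The Sue's-nose problem can be seen in a few lines of code. The hypothetical Python sketch below (the rule and the "knowledge" table are invented for illustration) contrasts a single hard-coded rule with an answerer that has one extra piece of background knowledge, namely whether a possession travels with its owner.

    # A naive rule: anything of Sue's is "in the house" whenever Sue is.
    NAIVE_ANSWER = "in the house"

    # Background knowledge the naive rule lacks: is the thing attached to its owner?
    ATTACHED_TO_OWNER = {"nose": True, "backpack": False}

    def naive_answer(thing):
        return NAIVE_ANSWER

    def informed_answer(thing):
        if ATTACHED_TO_OWNER.get(thing):
            return "in the house (noses travel with their owners)"
        return "I don't know (Sue may have left it somewhere else)"

    for thing in ["nose", "backpack"]:
        print(f"Where is Sue's {thing}? naive: {naive_answer(thing)!r}, "
              f"informed: {informed_answer(thing)!r}")

Every new kind of object, owner, and situation demands another entry or another rule, which is exactly the combinatorial explosion the text describes.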
The Turing Test is not without its critics, however. New York University philosopher Ned Block contends that Turing's imitation game tests only whether a computer behaves in a way that is identical to a human being (we are only talking about verbal and cognitive behavior, of course). Imagine we could program a computer with all possible conversations of a certain finite length. When the interrogator asks a question Q, the computer looks up a conversation in which Q occurred and then types out the answer that followed, A. When the interrogator asks his next question, P, the computer looks up the string Q, A, P and types out the answer that followed in that conversation, B. Such a computer, Block says, would have the intelligence of a toaster, but it would pass the Turing Test.
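A minimal Python sketch of the machine Block imagines might look like the following. The canned table here is a tiny, invented fragment; Block's thought experiment assumes a table covering every possible conversation up to some finite length, which is astronomically large but requires no understanding at all.

    # Pure lookup-table responder keyed on the whole conversation so far ("blockhead").
    CANNED = {
        ("Hello",): "Hi there, how are you?",
        ("Hello", "Hi there, how are you?", "Fine. Do you like poetry?"):
            "I prefer a good novel, to be honest.",
    }

    def blockhead_reply(history):
        """Return the scripted continuation for this exact conversation, if any."""
        return CANNED.get(tuple(history), "(no entry for this conversation)")

    conversation = ["Hello"]
    answer = blockhead_reply(conversation)
    print(answer)                                   # Hi there, how are you?
    conversation += [answer, "Fine. Do you like poetry?"]
    print(blockhead_reply(conversation))            # I prefer a good novel, to be honest.

Nothing in the program models meaning; it simply matches the conversation history against a table, which is Block's point.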
One response to Block's challenge is that the problem he raises for computers applies to human beings as well. Setting aside physical characteristics, all the evidence we ever have for whether a human being can think is the behavior that the thought produces. And this means that we can never really know if our conversation partner--our interlocutor--is having a conversation in the ordinary sense of the term. Philosophers call this the "other minds" problem.

A similar line of discussion--the Chinese Room Argument--was developed by philosopher John Searle of the University of California, Berkeley, to show that a computer can pass the Turing Test without ever understanding the meaning of any of the words it uses. To illustrate, Searle asks us to imagine that computer programmers have written a program to simulate the understanding of Chinese. Imagine that you are a processor in a computer. You are locked in a room (the computer casing) full of baskets containing Chinese symbols (characters that would appear on a computer screen). You do not know Chinese, but you are given a big book (software) that tells you how to manipulate the symbols. The rules in the book do not tell you what the symbols mean, however. When Chinese characters are passed into the room (input), your job is to pass symbols back out of the room (output). For this task, you receive a further set of rules--these rules correspond to the simulation program that is designed to pass the Turing Test. Unbeknownst to you, the symbols that come into the room are questions, and the symbols you push back out are answers. Furthermore, these answers perfectly imitate answers a Chinese speaker might give; so from outside the room it will look exactly as if you understand Chinese. But of course, you do not. Such a computer would pass the Turing Test, but it would not, in fact, think.

Could computers ever come to understand what the symbols mean? Computer scientist Stevan Harnad of the University of Southampton in England believes they could, but, like people, computers would have to grasp abstractions and their context by first learning how they relate to the real, outside world. People learn the meaning of words by means of a causal connection between us and the object the symbol stands for. We understand the word "tree" because we have had experiences with trees. Think of the moment the blind and deaf Helen Keller finally understood the meaning of the word "water" that was being signed into her hand; the epiphany occurred when she felt the water that came out of a pump.

Harnad contends that for a computer to understand the meanings of the symbols it manipulates, it would have to be equipped with a sensory apparatus--a camera, for instance--so that it could actually see the objects represented by the symbols. A project like little Darwin VII--the robot with the camera for eyes and metal mandibles for jaws--is a step in that direction. In that spirit, Harnad proposes a revised Turing Test, which he calls the Robotic Turing Test. To merit the label "thinking," a machine would have to pass the Turing Test and be connected to the outside world. Interestingly, this addition captures one of Turing's own observations: a machine, he wrote in a 1948 report, should be allowed to "roam the countryside" so that it would be able to "have a chance of finding things out for itself". The sensory equipment Harnad thinks of as crucial might provide a computer scientist with a way to supply a computer with the context and background knowledge needed to pass the Turing Test. Rather than requiring that all the relevant data be entered by brute force, the robot learns what it needs to know by interacting with its environment. Can we be sure that providing sensory access to the outside world will ultimately endow a computer with true understanding? This is what Searle wants to know. But before we can answer that question, we may have to wait until a machine actually passes the Robotic Turing Test suggested by Harnad. In the meantime, the model of intelligence put forth by Turing's test continues to provide an important research strategy for AI. According to Dartmouth College philosopher James H. Moor, the main strength of the test is the vision it offers--that of "constructing a sophisticated general intelligence that learns." This vision sets a valuable goal for AI, regardless of whether a machine that passes the Turing Test can think like us in the sense of possessing understanding or consciousness.