Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This, along with recent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain.
The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. The attendees would become the leaders of AI research for many decades, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, who founded AI laboratories at MIT, CMU and Stanford. By 1965, research was also underway in England, led by Donald Michie, who founded a similar laboratory at the University of Edinburgh. These laboratories produced programs that were, to most people, simply astonishing: computers were solving word problems in algebra, proving logical theorems and speaking English. By the mid-1960s, AI was heavily funded by the U.S. Department of Defense[29] and many were optimistic about the future of the field. Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".
Deduction, reasoning, problem solving
Early AI researchers developed algorithms that imitated the step-by-step reasoning that human beings use when they solve puzzles, play board games or make logical deductions. By the late 1980s and 1990s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.
For difficult problems, most of these algorithms can require enormous computational resources — most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem solving algorithms is a high priority for AI research.
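To make the scale of this explosion concrete, here is a small illustrative calculation (a sketch of my own, not drawn from the text): a search tree with branching factor b and depth d contains on the order of b to the power d states, so even modest problems quickly exceed any realistic budget.

```python
# Illustrative only: counts the states a brute-force search would have to
# examine in a complete tree with the given branching factor and depth.
def states_to_examine(branching_factor: int, depth: int) -> int:
    return branching_factor ** depth

for depth in (5, 10, 20, 30):
    print(depth, states_to_examine(10, depth))
# With a branching factor of just 10, depth 30 already means 10**30 states,
# far beyond any available memory or computing time.
```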
Human beings solve most of their problems using fast, intuitive judgments rather than the conscious, step-by-step deduction that early AI research was able to model. AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside human and animal brains that give rise to this skill.
Knowledge representation
Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A complete representation of "what exists" is an ontology (borrowing a word from traditional philosophy), of which the most general are called upper ontologies.
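As a minimal sketch of such a representation (the objects, categories and relations below are invented for illustration), facts about objects and their relations can be stored as simple triples and queried by following links between categories:

```python
# A toy knowledge base of (entity, relation, value) triples; illustrative only.
knowledge_base = {
    ("Tweety", "is_a", "canary"),
    ("canary", "subclass_of", "bird"),
    ("bird", "has_property", "can_fly"),
}

def is_a(entity: str, category: str) -> bool:
    """Follow is_a and subclass_of links to test category membership."""
    frontier = {cat for (e, rel, cat) in knowledge_base
                if e == entity and rel in ("is_a", "subclass_of")}
    seen = set()
    while frontier:
        cat = frontier.pop()
        if cat == category:
            return True
        seen.add(cat)
        frontier |= {c for (e, rel, c) in knowledge_base
                     if e == cat and rel == "subclass_of" and c not in seen}
    return False

print(is_a("Tweety", "bird"))   # True: a canary is a kind of bird
```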
Planning
Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices.
In classical planning problems, the agent can assume that it is the only thing acting on the world and that it can be certain of the consequences of its actions. However, if this is not true, it must periodically check whether the world matches its predictions and change its plan as necessary, which requires the agent to reason under uncertainty.
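A minimal sketch of utility-maximizing choice (the actions, probabilities and utilities below are invented) shows how an agent that can predict the outcomes of its actions picks the one with the highest expected value:

```python
# Each action maps to a list of (probability, utility) outcomes; toy numbers.
actions = {
    "take_highway": [(0.8, 10.0), (0.2, -5.0)],
    "take_backroad": [(1.0, 6.0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))   # take_highway 7.0
```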
Learning
Machine learning has been central to AI research from the beginning. Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression takes a set of numerical input/output examples and attempts to discover a continuous function that would generate the outputs from the inputs. In reinforcement learning the agent is rewarded for good responses and punished for bad ones. These can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.
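Two toy sketches (invented data; no claim about any particular algorithm mentioned in the text) illustrate the two supervised settings described above, classification and regression:

```python
# Classification: label a new point by its nearest labelled example (1-NN).
labelled = [((1.0, 1.0), "cat"), ((5.0, 5.0), "dog")]

def classify(point):
    return min(labelled,
               key=lambda ex: sum((a - b) ** 2 for a, b in zip(point, ex[0])))[1]

# Regression: fit y = w * x to numeric input/output examples by least squares.
xs, ys = [1.0, 2.0, 3.0], [2.1, 3.9, 6.2]
w = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

print(classify((1.5, 0.5)))   # "cat"
print(round(w, 2))            # roughly 2, the slope that generates the outputs
```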
Natural language processing
Natural language processing gives machines the ability to read and understand the languages that human beings speak. Many researchers hope that a sufficiently powerful natural language processing system would be able to acquire knowledge on its own, by reading the existing text available over the internet. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.
Perception
Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar and others more exotic) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected sub-problems are speech recognition, facial recognition and object recognition.
Social intelligence
Emotion and social skills play two roles for an intelligent agent. First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory and decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Second, for good human-computer interaction, an intelligent machine needs to display emotions. At the very least it must appear polite and sensitive to the humans it interacts with. At best, it should have normal emotions itself.
General intelligence
Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them. A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.
Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.
Approaches
There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues. A few of the most long-standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence, by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering? Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems? Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require "sub-symbolic" processing?
Search and optimization
Many problems in AI can be solved in theory by intelligently searching through many possible solutions: Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule. Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis. Robotics algorithms for moving limbs and grasping objects use local searches in configuration space. Many learning algorithms use search algorithms based on optimization.
Simple exhaustive searches are rarely sufficient for most real world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use "heuristics" or "rules of thumb" that eliminate choices that are unlikely to lead to the goal (called "pruning the search tree"). Heuristics supply the program with a "best guess" for what path the solution lies on.
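The following is a small A*-style sketch of my own (the graph, step costs and heuristic values are invented) showing how a heuristic "best guess" of the remaining distance steers the search toward the goal instead of expanding everything exhaustively:

```python
import heapq

graph = {"A": [("B", 1), ("C", 4)], "B": [("G", 5)], "C": [("G", 1)], "G": []}
heuristic = {"A": 3, "B": 4, "C": 1, "G": 0}   # estimated distance to goal G

def a_star(start, goal):
    frontier = [(heuristic[start], 0, start, [start])]
    best_cost = {start: 0}
    while frontier:
        _, cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, cost
        for nxt, step in graph[node]:
            new_cost = cost + step
            if new_cost < best_cost.get(nxt, float("inf")):
                best_cost[nxt] = new_cost
                heapq.heappush(frontier, (new_cost + heuristic[nxt],
                                          new_cost, nxt, path + [nxt]))
    return None, float("inf")

print(a_star("A", "G"))   # (['A', 'C', 'G'], 5)
```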
A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[100]
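A minimal hill-climbing sketch, assuming a toy one-dimensional objective of my own choosing, illustrates the "refine the guess until no refinement helps" idea:

```python
import random

def objective(x: float) -> float:
    return -(x - 3.0) ** 2          # a single peak at x = 3

def hill_climb(steps=1000, step_size=0.1):
    x = random.uniform(-10, 10)     # random starting point on the landscape
    for _ in range(steps):
        best = max((x + step_size, x - step_size), key=objective)
        if objective(best) <= objective(x):
            break                   # no uphill move left: we are at the top
        x = best
    return x

print(round(hill_climb(), 1))       # approximately 3.0
```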
Evolutionary computation uses a form of optimization search. For example, it may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization) and evolutionary algorithms (such as genetic algorithms and genetic programming).
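A compact genetic-algorithm sketch (the toy problem, population size and mutation rate are my own choices) shows mutation, recombination and survival of the fittest acting on a population of guesses:

```python
import random

TARGET_LEN, POP_SIZE, GENERATIONS = 20, 30, 60

def fitness(genome):                 # number of 1s: higher is fitter
    return sum(genome)

def crossover(a, b):                 # recombine two parents at a random point
    cut = random.randrange(1, TARGET_LEN)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.02):       # flip each bit with small probability
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(TARGET_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]          # only the fittest survive
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(max(fitness(g) for g in population), "of", TARGET_LEN)
```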
Logic
Logic was introduced into AI research by John McCarthy in his 1958 Advice Taker proposal. Logic is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning and inductive logic programming is a method for learning.
There are several different forms of logic used in AI research. Propositional or sentential logic is the logic of statements which can be true or false. First-order logic also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic is a version of first-order logic that allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Default logics, non-monotonic logics and circumscription are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics; situation calculus, event calculus and fluent calculus (for representing events and time); causal calculus; belief calculus; and modal logics.
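As a small illustration of truth values between 0 and 1, here is a fuzzy-logic sketch using the common min/max convention (one of several possible choices; the membership values are invented):

```python
def fuzzy_and(a: float, b: float) -> float:
    return min(a, b)

def fuzzy_or(a: float, b: float) -> float:
    return max(a, b)

def fuzzy_not(a: float) -> float:
    return 1.0 - a

room_is_warm = 0.7      # degrees of truth, not just True or False
fan_is_noisy = 0.2
print(fuzzy_and(room_is_warm, fuzzy_not(fan_is_noisy)))   # 0.7
```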
In 1963, J. Alan Robinson discovered a simple, complete and entirely algorithmic method for logical deduction which can easily be performed by digital computers. However, a naive implementation of the algorithm quickly leads to a combinatorial explosion or an infinite loop. In 1974, Robert Kowalski suggested representing logical expressions as Horn clauses (statements in the form of rules: "if p then q"), which reduced logical deduction to backward chaining or forward chaining. This greatly alleviated (but did not eliminate) the problem.
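A minimal forward-chaining sketch over Horn clauses ("if all premises hold then the conclusion holds"), with invented rules and facts, shows how this kind of deduction reduces to repeated rule application:

```python
rules = [
    ({"rain", "outside"}, "wet"),    # if rain and outside then wet
    ({"wet"}, "cold"),               # if wet then cold
]
facts = {"rain", "outside"}

changed = True
while changed:                       # keep applying rules until nothing new
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)   # {'rain', 'outside', 'wet', 'cold'}
```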
In addition to the subject areas mentioned above, significant work in artificial intelligence has been done on puzzles and reasoning tasks, induction and concept identification, symbolic mathematics, theorem proving in formal logic, natural language understanding and generation, vision, robotics, chemistry, biology, engineering analysis, computer-assisted instruction, and computer-program synthesis and verification, to name only the most prominent. As computers become smaller and less expensive, more and more intelligence is built into automobiles, appliances, and other machines, as well as computer software, in everyday use.
Artificial Intelligence in Medical Diagnosis
In an attempt to overcome limitations inherent in conventional computer-aided diagnosis, investigators have created programs that simulate expert human reasoning. Hopes that such a strategy would lead to clinically useful programs have not been fulfilled, but many of the problems impeding creation of effective artificial intelligence programs have been solved. Strategies have been developed to limit the number of hypotheses that a program must consider and to incorporate pathophysiologic reasoning. The latter innovation permits a program to analyze cases in which one disorder influences the presentation of another. Prototypes embodying such reasoning can explain their conclusions in medical terms that can be reviewed by the user. Despite these advances, further major research and developmental efforts will be necessary before expert performance by the computer becomes a reality.
We will focus on how improved representations of clinical knowledge and sophisticated problem-solving strategies have advanced the field of artificial intelligence in medicine. Our purpose is to provide an overview of artificial intelligence in medicine to the physician who has had little contact with computer science. We will not concentrate on individual programs; rather, we will draw on the key insights of such programs to create a coherent picture of artificial intelligence in medicine and the promising directions in which the field is moving. We will therefore describe the behavior not of a single existing program but the approach taken by one or another of the many programs to which we refer. It remains an important challenge to combine successfully the best characteristics of these programs to build effective computer-based medical expert systems. Several collections of papers (19-21) provide detailed descriptions of the programs on which our analysis is based.
Function: Clinical Problem-Solving
Any program designed to serve as a consultant to the physician must contain certain basic features. It must have a store of medical knowledge expressed as descriptions of possible diseases. Depending on the breadth of the clinical domain, the number of hypotheses in the database can range from a few to many thousands. In the simplest conceivable representation of such knowledge, each disease hypothesis identifies all of the features that can occur in the particular disorder. In addition, the program must be able to match what is known about the patient with its store of information. Even the most sophisticated programs typically depend on this basic strategy.
The simplest version of such programs operates in the following fashion when presented with the chief complaint and when later given additional facts.
1. For each possible disease (diagnosis) determine whether the given findings are to be expected.
2. Score each disease (diagnosis) by counting the number of given findings that would have been expected.
3. Rank-order the possible diseases (diagnoses) according to their scores.
The power of such a simple program can be greatly enhanced through the use of a mechanism that poses questions designed to elicit useful information. Take, for example, an expansion of the basic program by the following strategy (a sketch of the complete six-step procedure appears after step 6):
4. Select the highest-ranking hypothesis and ask whether one of the features of that disease, not yet considered, is present or absent.
5. If inquiry has been made about all possible features of the highest-ranked hypothesis, ask about the features of the next best hypothesis.
6. If a new finding is offered, begin again with step 1; otherwise, print out the rank-ordered diagnoses and their respective supportive findings and stop.
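The following is a minimal sketch of the six-step procedure above. The disease profiles, findings and the way the physician's answers are supplied are all invented for illustration; a real system would draw on a far larger medical knowledge base.

```python
diseases = {
    "flu":         {"fever", "cough", "aches"},
    "strep":       {"fever", "sore throat"},
    "common cold": {"cough", "sneezing"},
}

def rank(findings):
    """Steps 1-3: score each disease by its expected findings present, then sort."""
    scores = {d: len(features & findings) for d, features in diseases.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def consult(findings, answer):
    """Steps 4-6: ask about not-yet-considered features of the best hypotheses.

    `answer` stands in for questioning the physician: it returns True if the
    feature is present and False otherwise."""
    asked = set(findings)
    while True:
        for disease, _ in rank(findings):
            unasked = diseases[disease] - asked
            if unasked:
                feature = sorted(unasked)[0]
                asked.add(feature)
                if answer(feature):       # a new finding: start over at step 1
                    findings.add(feature)
                break
        else:
            return rank(findings)         # nothing left to ask: report ranking

patient = {"fever", "cough"}
print(consult(patient, lambda f: f == "aches"))
```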
Advantages: Programs using artificial intelligence techniques have several major advantages over programs using more traditional methods. These programs have a greater capacity to quickly narrow the number of diagnostic possibilities, they can effectively use pathophysiologic reasoning, and they can create models of a specific patient's illness. Such models can even capture the complexities created by several disease states that interact and overlap. These programs can also explain in a straightforward manner how particular conclusions have been reached. This latter ability promises to be of critical importance when expert systems become available for day-to-day use; unless physicians can assess the validity of a program's conclusions, they cannot rely on the computer as a consultant. Indeed, a recent survey has shown that a program's ability to explain its reasoning is considered by clinicians to be more important than its ability to arrive consistently at the correct diagnosis. An explanatory capability will also be required by those responsible for correcting errors or modifying programs; as programs become larger and more complicated, no one will be able to penetrate their complexity without help from the programs themselves.
Disadvantages: Most approaches to computer-assisted diagnosis have, until the past few years, been based on one of three strategies: flow charts, statistical pattern-matching, or probability theory. All three techniques have been successfully applied to narrow medical domains, but each has serious drawbacks when applied to broad areas of clinical medicine. Flow charts quickly become unmanageably large. Further, they are unable to deal with uncertainty, a key element in most serious diagnostic problems. Probabilistic methods and statistical pattern-matching typically incorporate unwarranted assumptions, such as that the set of diseases under consideration is exhaustive, that the diseases under suspicion are mutually exclusive, or that each clinical finding occurs independently of all others. In theory, these problems could be avoided by establishing a database of probabilities that copes with all possible interactions. But gathering and maintaining such a massive database would be a nearly impossible task. Moreover, all programs that rely solely on statistical techniques ignore causality of disease and thus cannot explain to the physician their reasoning processes or how they reach their diagnostic conclusions.
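To make the independence assumption concrete, here is a small naive-Bayes-style scorer of my own (the priors and likelihoods are invented). It multiplies per-finding probabilities as though each finding occurred independently of the others, which is precisely the assumption criticized above:

```python
priors = {"flu": 0.05, "common cold": 0.20}
likelihood = {                       # P(finding | disease), toy numbers
    "flu":         {"fever": 0.9, "cough": 0.8},
    "common cold": {"fever": 0.2, "cough": 0.7},
}

def naive_score(disease, findings):
    score = priors[disease]
    for f in findings:
        score *= likelihood[disease].get(f, 0.01)
    return score                     # valid only if findings are independent

findings = {"fever", "cough"}
for d in priors:
    print(d, round(naive_score(d, findings), 4))   # flu 0.036, cold 0.028
```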
Cybernetics is the science of control. Its name, appropriately suggested by the mathematician Norbert Wiener (1894-1964), is derived from the Greek for ‘steersman’, pointing to the essence of cybernetics as the study and design of devices for maintaining stability, or for homing in on a goal or target. Its central concept is feedback. Since the ‘devices’ may be living or man-made, cybernetics bridges biology and engineering.
Stability of the human body is achieved by its static geometry and, very differently, by its dynamic control. A statue of a human being has to have a large base or it topples over. It falls when the centre of mass is vertically outside the base of the feet. Living people make continuous corrections to maintain themselves standing. Small deviations of posture are detected by sensory signals (proprioception) from nerve fibers in the muscles and around the joint capsules of the ankles and legs, and by the otoliths (the organs of balance in the inner ear). Corrections of posture are the result of dynamic feedback from these senses, to maintain dynamic stability. When walking towards a target, such as the door of a room, deviations from the path are noted, mainly visually, and corrected from time to time during the movement, until the goal is reached. The key to this process is continuous correction of the output system by signals representing detected errors of the output, known as ‘negative feedback’. The same principle, often called servo-control, is used in engineering to maintain the stability of machinery and to seek and find goals, with many applications such as guided missiles and autopilots.
The principles of feedback apply to the body's regulation of temperature, blood pressure, and so on. Though the principles are essentially the same as in engineering, for living organisms dynamic stability by feedback is often called ‘homeostasis’, following W. B. Cannon's pioneering book The wisdom of the body (1932). In the history of engineering, there are hints of the principle as far back as ancient Greek devices, such as self-regulating oil lamps. From the Middle Ages, the tail vanes of windmills, continuously steering the sails into the veering wind, are a well-known early example of guidance by feedback. A more sophisticated system reduced the weight of the upper grinding stone when the wind fell, to keep the mill operating optimally in changing conditions. Servo-systems using feedback can make machines remarkably life-like. The first feedback device to be mathematically described was the rotary governor, used by James Watt to keep the rate of steam engines constant under varying loads.
Servo-systems suffer characteristic oscillations when the output overshoots the target, as occurs when the feedback is too slow or too weak to correct the output. Increasing the ‘loop gain’ (i.e. the magnitude of correction resulting from a particular feedback signal) too far likewise increases tremor, in machines and organisms alike. It is tempting to believe that ‘intention tremor’ in patients who have suffered damage to the cerebellum is caused by a change in the characteristics of servo control.
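A minimal proportional-control (negative feedback) sketch of my own makes the point about loop gain concrete: the output is repeatedly corrected toward a target, and too strong a correction produces overshoot and oscillation.

```python
def servo(gain, target=1.0, steps=12):
    output, history = 0.0, []
    for _ in range(steps):
        error = target - output         # detected error of the output
        output += gain * error          # correction proportional to the error
        history.append(round(output, 2))
    return history

print(servo(gain=0.5))   # smooth approach to the target
print(servo(gain=1.8))   # overshoots, then oscillates around the target
print(servo(gain=2.2))   # feedback too strong: the oscillation grows
```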
Dynamic control requires the transmission of information. Concepts of information are included in cybernetics, especially following Claude Shannon's important mathematical analysis in 1949. It does not, however, cover digital computing. Cybernetic systems are usually analogue, and computing is described with very different concepts. Early Artificial Intelligence (AI) was analogue-based (reaching mental goals by correcting abstract errors) and there has recently been a return to analogue computing systems, with self-organizing ‘neural nets’.
A principal pioneer of cybernetic concepts of brain function was the Cambridge psychologist Kenneth Craik, who described thinking in terms of physical models analogous to physiological processes. Craik pointed to engineering examples, such as Kelvin's tide predictor, which predicted tides with a system of pulleys and levers. The essential cybernetic philosophy of neurophysiology is that the brain functions by such principles as feedback and information, represented by electro-chemical, physical activity in the nervous system. It is assumed that this creates mind: so, in principle, and no doubt in practice, machines can be fully mindful.
Influences
Winograd and Flores credit the influence of Humberto Maturana, a biologist who recasts the concepts of "language" and "living system" with a cybernetic eye [Maturana & Varela 1988], in shifting their opinions away from the AI perspective. They quote Maturana: "Learning is not a process of accumulation of representations of the environment; it is a continuous process of transformation of behavior through continuous change in the capacity of the nervous system to synthesize it. Recall does not depend on the indefinite retention of a structural invariant that represents an entity (an idea, image or symbol), but on the functional ability of the system to create, when certain recurrent demands are given, a behavior that satisfies the recurrent demands or that the observer would class as a reenacting of a previous one." [Maturana 1980] Cybernetics has directly affected software for intelligent training, knowledge representation, cognitive modeling, computer-supported coöperative work, and neural modeling, and useful results have been demonstrated in all these areas. Like AI, however, cybernetics has not produced recognizable solutions to the machine intelligence problem, at least not for domains considered complex in the metrics of symbolic processing. Many beguiling artifacts have been produced whose appeal is more familiar in an entertainment medium, or in organic life, than in a piece of software [Pask 1971]. Meanwhile, in a repetition of the history of the 1950s, the influence of cybernetics is felt throughout the hard and soft sciences, as well as in AI. This time, however, it is cybernetics' epistemological stance — that all human knowing is constrained by our perceptions and our beliefs, and hence is subjective — that is its contribution to these fields. We must continue to wait to see whether cybernetics leads to breakthroughs in the construction of intelligent artifacts of the complexity of a nervous system, or a brain.
Cybernetics Today
The term "cybernetics" has been widely misunderstood, perhaps for two broad reasons. First, its identity and boundary are difficult to grasp. The nature of its concepts and the breadth of its applications, as described above, make it difficult for non-practitioners to form a clear concept of cybernetics. This holds even for professionals of all sorts, as cybernetics never became a popular discipline in its own right; rather, its concepts and viewpoints seeped into many other disciplines, from sociology and psychology to design methods and post-modern thought. Second, the advent of the prefix "cyb" or "cyber" as a referent to either robots ("cyborgs") or the Internet ("cyberspace") further diluted its meaning, to the point of serious confusion to everyone except the small number of cybernetic experts.
However, the concepts and origins of cybernetics have become of greater interest recently, especially since around the year 2000. AI's lack of success in creating intelligent machines has increased curiosity about alternative views of what a brain does [Ashby 1960] and alternative views of the biology of cognition [Maturana 1970]. There is growing recognition of the value of a "science of subjectivity" that encompasses both objective and subjective interactions, including conversation [Pask 1976]. Designers are rediscovering the influence of cybernetics on the tradition of 20th-century design methods, and the need for rigorous models of goals, interaction, and system limitations for the successful development of complex products and services, such as those delivered via today's software networks. And, as in any social cycle, students of history reach back with minds more open than was possible at the inception of cybernetics, to reinterpret the meaning and contribution of a previous era.
Robotics
A robot is a virtual or mechanical artificial agent. In practice, it is usually an electro-mechanical machine which is guided by computer or electronic programming, and is thus able to do tasks on its own. Another common characteristic is that by its appearance or movements, a robot often conveys a sense that it has intent or agency of its own.
While there is no single correct definition of "robot", a typical robot will have several or possibly all of the following characteristics.
It is an electric machine which has some ability to interact with physical objects and to be given electronic programming to do a specific task or to do a whole range of tasks or actions. It may also have some ability to perceive and absorb data on physical objects, or on its local physical environment, or to process data, or to respond to various stimuli. This is in contrast to a simple mechanical device such as a gear or a hydraulic press or any other item which has no processing ability and which does tasks through purely mechanical processes and motion.
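A schematic sense-process-act loop (purely illustrative; the sensor reading and commands are invented) captures this distinction between a robot that processes data and a purely mechanical device:

```python
def sense():
    return {"obstacle_distance_m": 0.4}        # stand-in for real sensor input

def decide(perception):
    return "stop" if perception["obstacle_distance_m"] < 0.5 else "forward"

def act(command):
    print("actuator command:", command)        # stand-in for motor control

for _ in range(3):                             # the robot's control loop
    act(decide(sense()))
```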
Social impact
As robots have become more advanced and sophisticated, experts and academics have increasingly explored the questions of what ethics might govern robots' behavior, and whether robots might be able to claim any kind of social, cultural, ethical or legal rights. One scientific team has said that it is possible that a robot brain will exist by 2019. Others predict robot intelligence breakthroughs by 2050. Recent advances have made robotic behavior more sophisticated.
Vernor Vinge has suggested that a moment may come when computers and robots are smarter than humans. He calls this "the Singularity." He suggests that it may be somewhat or possibly very dangerous for humans. This is discussed by a philosophy called Singularitarianism.
In 2009, experts attended a conference to discuss whether computers and robots might be able to acquire any autonomy, and how much these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls. Various media sources and scientific groups have noted separate trends in differing areas which might together result in greater robotic functionalities and autonomy, and which pose some inherent concerns.
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. There are also concerns about technology which might allow some armed robots to be controlled mainly by other robots. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions. Some public concerns about autonomous robots have received media attention, especially one robot, EATR, which can continually refuel itself using biomass and organic substances which it finds on battlefields or other local environments.
The Association for the Advancement of Artificial Intelligence has studied this topic in depth and its president has commissioned a study to look at this issue.
Some have suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane. Several such measures reportedly already exist, with robot-heavy countries such as Japan and South Korea having begun to pass regulations requiring robots to be equipped with safety systems, and possibly sets of 'laws' akin to Asimov's Three Laws of Robotics. An official report was issued in 2009 by the Japanese government's Robot Industry Policy Committee. Chinese officials and researchers have issued a report suggesting a set of ethical rules, as well as a set of new legal guidelines referred to as "Robot Legal Studies." Some concern has been expressed over a possible occurrence of robots telling apparent falsehoods.
Advantages: Increased productivity, accuracy, and endurance
Many factory jobs are now performed by robots. This has led to cheaper mass-produced goods, including automobiles and electronics. Stationary manipulators used in factories have become the largest market for robots. In 2006, there were an estimated 3,540,000 service robots in use, and an estimated 950,000 industrial robots. A different estimate counted more than one million robots in operation worldwide in the first half of 2008, with roughly half in Asia, 32% in Europe, 16% in North America, 1% in Australasia and 1% in Africa.
Some examples of factory robots:
§ Car production: Over the last three decades automobile factories have become dominated by robots. A typical factory contains hundreds of industrial robots working on fully automated production lines, with one robot for every ten human workers. On an automated production line, a vehicle chassis on a conveyor is welded, glued, painted and finally assembled at a sequence of robot stations.
§ Packaging: Industrial robots are also used extensively for palletizing and packaging of manufactured goods, for example for rapidly taking drink cartons from the end of a conveyor belt and placing them into boxes, or for loading and unloading machining centers.
§ Electronics: Mass-produced printed circuit boards (PCBs) are almost exclusively manufactured by pick-and-place robots, typically with SCARA manipulators, which remove tiny electronic components from strips or trays, and place them on to PCBs with great accuracy. Such robots can place hundreds of thousands of components per hour, far out-performing a human in speed, accuracy, and reliability.
Disadvantages: Fears and concerns about robots have been repeatedly expressed in a wide range of books and films. A common theme is the development of a master race of conscious and highly intelligent robots, motivated to take over or destroy the human race. Some fictional robots are programmed to kill and destroy; others gain superhuman intelligence and abilities by upgrading their own software and hardware. Another common theme is the reaction, sometimes called the "uncanny valley", of unease and even revulsion at the sight of robots that mimic humans too closely. Frankenstein (1818), often called the first science fiction novel, has become synonymous with the theme of a robot or monster advancing beyond its creator. In the TV show Futurama, robots are portrayed as humanoid figures that live alongside humans rather than as robotic butlers; they still work in industry, but also lead daily lives of their own.
Manuel De Landa has noted that "smart missiles" and autonomous bombs equipped with artificial perception can be considered robots, and they make some of their decisions autonomously. He believes this represents an important and dangerous trend in which humans are handing over important decisions to machines.
Marauding robots may have entertainment value, but unsafe use of robots constitutes an actual danger. A heavy industrial robot with powerful actuators and unpredictably complex behavior can cause harm, for instance by stepping on a human's foot or falling on a human. Most industrial robots operate inside a security fence which separates them from human workers, but not all. Two robot-caused deaths are those of Robert Williams and Kenji Urada. Robert Williams was struck by a robotic arm at a casting plant in Flat Rock, Michigan on January 25, 1979. 37-year-old Kenji Urada, a Japanese factory worker, was killed in 1981. Urada was performing routine maintenance on the robot, but neglected to shut it down properly, and was accidentally pushed into a grinding machine.
Artificial Intelligence Programming for Video Games
Today it is almost impossible to write professional-quality games without using at least some aspects of artificial intelligence. Artificial intelligence (AI) is a useful tool for creating characters that can choose among responses to a player's actions yet still act in a fairly unpredictable fashion.
Video game artificial intelligence is a programming area that tries to make the computer act in a similar way to human intelligence. There are a number of underlying principles behind video game AI, the major one being that of having a rule based system whereby information and rules are entered into a database, and when the video game AI is faced with a situation, it finds appropriate information and acts on it according to the set of rules that apply to the situation. If the AI database is large enough, then there is sufficient unpredictability in the database to produce a simulation of human choice.
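A minimal rule-based sketch (the situations and candidate actions are invented) shows the basic lookup-and-act idea; with a large enough rule database, the resulting choices appear suitably unpredictable to the player:

```python
import random

rules = {
    "player_visible": ["attack", "take_cover", "call_allies"],
    "low_health":     ["retreat", "use_health_pack"],
    "idle":           ["patrol", "stand_guard"],
}

def choose_action(situation: str) -> str:
    applicable = rules.get(situation, rules["idle"])
    return random.choice(applicable)    # many rules give the appearance
                                        # of human-like, unpredictable choice

print(choose_action("player_visible"))
```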
Game artificial intelligence, more broadly, refers to techniques used in computer and video games to produce the illusion of intelligence in the behavior of non-player characters (NPCs). The techniques used typically draw upon existing methods from the field of artificial intelligence (AI). However, the term game AI is often used to refer to a broad set of algorithms that also include techniques from control theory, robotics, computer graphics and computer science in general.
Since game AI is centered on the appearance of intelligence and good game play, its approach is very different from that of traditional AI; hacks and cheats are acceptable and, in many cases, the computer's abilities must be toned down to give human players a sense of fairness. This is true, for example, in first-person shooter games, where NPCs' otherwise perfect movement and aiming would be beyond human skill.
Advantages: Game AI and heuristic algorithms are used in a wide variety of quite disparate fields inside a game. The most obvious is the control of any NPCs in the game, although scripting is currently the most common means of control. Pathfinding is another common use for AI, widely seen in real-time strategy games. Pathfinding is the method for determining how to get an NPC from one point on a map to another, taking into consideration the terrain, obstacles and possibly "fog of war". Game AI is also involved in dynamic game difficulty balancing, which consists of adjusting the difficulty of a video game in real time based on the player's ability.
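As a hedged illustration of pathfinding, here is a breadth-first search on a small invented grid map; production games more commonly use A* with terrain costs, but the structure is the same:

```python
from collections import deque

grid = ["....#",
        ".##.#",
        "....."]                  # '#' marks an obstacle

def find_path(start, goal):
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        current = frontier.popleft()
        if current == goal:
            path = []
            while current is not None:
                path.append(current)
                current = came_from[current]
            return path[::-1]     # route from start to goal
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] != "#" and (nr, nc) not in came_from):
                came_from[(nr, nc)] = current
                frontier.append((nr, nc))
    return None

print(find_path((0, 0), (2, 4)))
```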
Disadvantages: Cheating AI (also called Rubberband AI) is a term used to describe the situation where the AI has bonuses over the players, such as having more hit-points, driving faster, or ignoring fog of war. The use of cheating in AI shows the limitations of the "intelligence" achievable artificially; generally speaking, in games where strategic creativity is important, humans could easily beat the AI after a minimum of trial and error if it were not for the bonuses. In the context of AI programming, cheating refers only to any privilege given specifically to the AI; this does not include the inhuman swiftness and accuracy natural to a computer, although a player might call that "cheating".
One common example of cheating AI is found in many racing games. If an AI opponent falls far enough behind the rest of the drivers it receives a boost in speed or other attributes, enabling it to catch up and/or again become competitive. This technique is known as "rubber banding" because it allows the AI character to quickly snap back into a competitive position. A similar method is also used in sports games such as the Madden NFL series. In more advanced games, NPC competitiveness may be achieved through dynamic game difficulty balancing, which can be considered fairer though still technically a cheat.
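A tiny rubber-banding sketch (the distance threshold and boost factor are invented) shows the mechanism:

```python
def ai_speed(base_speed: float, gap_to_leader: float) -> float:
    if gap_to_leader > 100.0:       # metres behind the leader
        return base_speed * 1.15    # temporary boost to snap back to the pack
    return base_speed

print(ai_speed(50.0, 30.0))    # 50.0  -- close to the leader, no bonus
print(ai_speed(50.0, 150.0))   # 57.5  -- far behind, boosted
```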
Argument and Comparison:
The ongoing success of applied Artificial Intelligence and of cognitive simulation seems assured. However, strong AI, which aims to duplicate human intellectual abilities, remains controversial. The reputation of this area of research has been damaged over the years by exaggerated claims of success that have appeared both in the popular media and in the professional journals. At the present time, even an embodied system displaying the overall intelligence of a cockroach is proving elusive, let alone a system rivaling a human being.
The difficulty of "scaling up" AI's so far relatively modest achievements cannot be overstated. Five decades of research in symbolic AI has failed to produce any firm evidence that a symbol-system can manifest human levels of general intelligence. Critics of nouvelle AI regard as mystical the view that high-level behaviours involving language-understanding, planning, and reasoning will somehow "emerge" from the interaction of basic behaviours like obstacle avoidance, gaze control and object manipulation. Connectionists have been unable to construct working models of the nervous systems of even the simplest living things. Caenorhabditis elegans, a much-studied worm, has approximately 300 neurons, whose pattern of interconnections is perfectly known. Yet connectionist models have failed to mimic the worm's simple nervous system. The "neurons" of connectionist theory are gross oversimplifications of the real thing.
However, this lack of substantial progress may simply be testimony to the difficulty of strong AI, not to its impossibility.
Let me turn to the very idea of strong artificial intelligence. Can a computer possibly be intelligent, think and understand? Noam Chomsky suggests that debating this question is pointless, for it is a question of decision, not fact: decision as to whether to adopt a certain extension of common usage. There is, Chomsky claims, no factual question as to whether any such decision is right or wrong, just as there is no question as to whether our decision to say that airplanes fly is right, or our decision not to say that ships swim is wrong. However, Chomsky is oversimplifying matters. Of course we could, if we wished, simply decide to describe bulldozers, for instance, as things that fly. But obviously it would be misleading to do so, since bulldozers are not appropriately similar to the other things that we describe as flying. The important questions are: could it ever be appropriate to say that computers are intelligent, think, and understand, and if so, what conditions must a computer satisfy in order to be so described?
Some authors offer the Turing test as a definition of intelligence: a computer is intelligent if and only if the test fails to distinguish it from a human being. However, Turing himself in fact pointed out that his test cannot provide a definition of intelligence. It is possible, he said, that a computer which ought to be described as intelligent might nevertheless fail the test because it is not capable of successfully imitating a human being. For example, why should an intelligent robot designed to oversee mining on the moon necessarily be able to pass itself off in conversation as a human being? If an intelligent entity can fail the test, then the test cannot function as a definition of intelligence.
It is even questionable whether a computer's passing the test would show that the computer is intelligent. In 1956 Claude Shannon and John McCarthy raised the objection to the test that it is possible in principle to design a program containing a complete set of "canned" responses to all the questions that an interrogator could possibly ask during the fixed time-span of the test. Like Parry, this machine would produce answers to the interviewer's questions by looking up appropriate responses in a giant table. This objection, which has in recent years been revived by Ned Block, Stephen White, and me, seems to show that in principle a system with no intelligence at all could pass the Turing test.
In fact AI has no real definition of intelligence to offer, not even in the sub-human case. Rats are intelligent, but what exactly must a research team achieve in order for it to be the case that the team has created an artifact as intelligent as a rat?
In the absence of a reasonably precise criterion for when an artificial system counts as intelligent, there is no way of telling whether a research program that aims at producing intelligent artifacts has succeeded or failed. One result of AI's failure to produce a satisfactory criterion of when a system counts as intelligent is that, whenever AI achieves one of its goals (for example, programs that can summarize newspaper articles or beat the world chess champion), critics are able to say "That's not intelligence!" (even critics who have previously maintained that no computer could possibly do the thing in question).
Comparing these technologies, we can see that artificial intelligence is not yet a complete part of the modern life we are living now. Scientists continue to develop AI year by year, hoping that it will become the most powerful technology mankind has ever seen. It is difficult to compare the technologies directly, because each of them today plays its own special, if still incomplete, role in the world. We encounter robotic machines everywhere, and on every corner we see video games attracting children with their extraordinary features. In hospitals and medical buildings, people can find many technologies that help us carry out complex tasks which would be risky for humans to do alone. These machines bring far more benefit than harm, but no one can say that they will lead us to perfection. For now, our mission is simply to wait and see where all of this will lead.
The role of the Islamic community in science:
Among those honored are researchers in Japan, Italy and the Netherlands, a country with a population of just 16 million. Yet the list does not include a single noteworthy breakthrough in any of the world's 56 Muslim nations, encompassing more than 1 billion people.
"Religious fundamentalism is always bad news for science," Pervez Amirali Hoodbhoy, a Pakistani Muslim physicist, recently wrote in an article on Islam and science for Physics Today.
"Scientific progress constantly demands that facts and hypotheses be checked. But there lies the problem: The scientific method is alien to traditional, unreformed religious thought."
While the reasons are many and often controversial, there is no doubt that the Muslim world lags far behind in scientific achievement and research:
* Muslim countries contribute less than 2 percent of the world's scientific literature. Spain alone produces almost as many scientific papers.
* In countries with substantial Muslim populations, the average number of scientists, engineers and technicians per 1,000 people is 8.5. The world average is 40.
* Muslim countries get so few patents that they don't even register on a bar graph comparison with other countries. Of the more than 3 million foreign inventions patented in the United States between 1977 and 2004, only 1,500 were developed in Muslim nations.
* In a survey by the Times of London, just two Muslim universities -- both in cosmopolitan Malaysia -- ranked among the top 200 universities worldwide.
Two Muslim scientists have won Nobel Prizes, but both did their groundbreaking work at Western institutions. Pakistan's Abdus Salam, who won the 1979 physics prize while in Britain, was barred from speaking at any university in his own country.
Today, many of the brightest scientific minds leave their countries to study in Western universities like Virginia Tech and the Massachusetts Institute of Technology, both of which have sizeable Muslim student associations. By some estimates, more than half of the science students from Arab countries never return home to work.