Wednesday, September 2, 2009

Computer-based and web-based training

CBT, short for Computer-based Training, is a term commonly used for a means of education that involves no face-to-face interaction. This definition, however, is too general, as computer-based training has many branches, each of which needs further specification.

WBT, short for web-based training, is a term used for training delivered over the internet or an intranet using a web browser. Web-based training includes static methods, such as streaming audio and video, hyperlinked web pages, live web broadcasts, and portals of information. It also includes interactive methods, such as bulletin boards, chat rooms, instant messaging, videoconferencing, and discussion threads.

It is very common for businesses to use web-based training to train employees. The instruction can be facilitated and paced by the trainer, or self-directed and paced by the trainee.

History:

Historically, computer-based training faced many obstacles on its way to popularity, due to the enormous resources required: human resources to create a CBT program and hardware resources to run it. However, the increase in PC computing power and the growing prevalence of computers equipped with CD-ROM drives have made computer-based training a more viable option for corporations and individuals alike.

Technology-enhanced learning:

This definition is also very broad, as the field's evolving nature forces it to change continuously. Technology-enhanced learning aims to provide socio-technical innovations; the field therefore describes the support of any learning activity through technology. Learning activities can follow different pedagogical approaches, and the main interest of technology-enhanced learning is to bring these activities together with modern technologies. This can range from enabling access to and authoring of a learning resource to elaborate software systems that manage learning resources and the learning process of learners by technical means.

Technology-enhanced learning has some significant differences from e-learning, even though the two are often perceived as synonymous. The main difference is that technology-enhanced learning is concerned only with the technological support of pedagogical approaches that utilize technology.

Photoshop:

With many training programs available for sale and competition running high, Photoshop remains one of the best ever. To many illustrators, copywriters, advertising executives and home users, Photoshop is irreplaceable. The e-learning industry therefore takes great care to make its Photoshop training as good as possible.

Self-paced Adobe Photoshop training CD-ROMs:

These are designed for those who need self-scheduled classes at their own pace. The CD-ROMs consist of recorded classes, sample works, and plenty of material to enhance one's Photoshop skills.

Online Adobe Photoshop training:

These online courses include videos, demonstrations, interactive content and more. They can be self-paced and self-guided as well, so learners can control their own learning schedules.

Onsite Adobe Photoshop training classes:

This option gives learners the opportunity to get their questions answered on the spot and to seek additional information whenever needed, as an instructor is present for guidance.

Linux Certification Training CBT:

For companies, Linux training is essential for employees, as it allows them to do their jobs to the best of their abilities. Providing your company with a series of Linux CBT courses would be very productive and time-saving. Consider how difficult and financially exhausting such training would be without computer-based training; that is one of this modern facility's greatest advantages.

Basic Computer Training CBT:

For small business corporations, all your employees may need is basic computer knowledge, but you would not have all the time in the world to teach them individually, nor would it be cost- or time-efficient to hire a trainer for such a purpose. Once again, this is where computer-based training makes it all easy. With this course, your knowledge of Microsoft Word, Excel, Access, Outlook, Adobe Photoshop, and much more will be complete, for whatever purpose intended.

Other Courses:

We have provided only a few examples of the thousands of computer-based training courses available. The list is endless, covering fields such as ICT, management, science, and much more. This could be considered a new era of academic education. We cannot say exactly which of these courses is the most important, because different people use different programs and can fulfill the required tasks with them. Moreover, these programs are used in varied areas, so all of them are important somewhere, depending on your profession.

Islamic Perspective:

The Islamic perspective is clear on this matter and does not need much debate to reach a settled conclusion. Knowledge is a core part of Islam, and seeking knowledge is obligatory for all Muslims. The fundamental principle here is therefore knowledge, and gaining knowledge in all fields, in every way possible, is considered worship of Allah.

Now if we look specifically at computer-based and web-based training, which is only a method of facilitating knowledge, we find no opposition from Islam at all; on the contrary, it is arguably praised, since it serves one of the greatest forms of worship of Allah: seeking knowledge.

The only prohibition we could find in modern technology would be its use to facilitate haram acts. For example, if computer-based learning is used to train soldiers on machinery they will operate to massacre a whole city, that CBT course would certainly be prohibited. But this cannot be used as the basis for an argument that CBT itself is prohibited, as Islamic jurisprudence bases its judgment on the principle that everything is pure and permitted in its original state, and becomes prohibited only when an act of evil is involved. For example, a knife is permitted to use, unless the intention is to kill someone.

We can therefore conclude that computer-based and web-based training is a blessing that Allah will reward us for using, if we do so with the right intention. Muslims must not only take advantage of modern technology in developing themselves, but also contribute research papers to play an active role in modern development, as we are a nation ordered to excel in all fields.

Computer Technology


A computer is a machine that manipulates data according to a set of instructions.

Although mechanical examples of computers have existed through much of recorded human history, the first electronic computers were developed in the mid-20th century (1940–1945). These were the size of a large room, consuming as much power as several hundred modern personal computers (PCs). Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space.[2] Simple computers are small enough to fit into a wristwatch, and can be powered by a watch battery. Personal computers in their various forms are icons of the Information Age and are what most people think of as "computers". The embedded computers found in many devices from MP3 players to fighter aircraft and from toys to industrial robots are however the most numerous.

The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore computers ranging from a mobile phone to a supercomputer are all able to perform the same computational tasks, given enough time and storage capacity.

-History of computing


The Jacquard loom, on display at the Museum of Science and Industry in Manchester, England, was one of the first programmable devices.

The first use of the word "computer" was recorded in 1613, referring to a person who carried out calculations, or computations, and the word continued to be used in that sense until the middle of the 20th century. From the end of the 19th century onwards though, the word began to take on its more familiar meaning, describing a machine that carries out computations.

The history of the modern computer begins with two separate technologies—automated calculation and programmability—but no single device can be identified as the earliest computer, partly because of the inconsistent application of that term. Examples of early mechanical calculating devices include the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150–100 BC). Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes and was operated by a complex system of ropes and drums that might be considered to be a means of deciding which parts of the mechanism performed which actions and when.

This is the essence of programmability.

The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliestprogrammable analog computer. It displayed the zodiac, the solar and lunar orbits, a crescent moon-shapedpointer travelling across a gateway causing automatic doors to open every hour, and five robotic musicians who played music when struck by levers operated by a camshaft attached to a water wheel. The length of day andnight could be re-programmed to compensate for the changing lengths of day and night throughout the year.

The Renaissance saw a re-invigoration of European mathematics and engineering. Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers, but none fit the modern definition of a computer, because they could not be programmed.

In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series of punched paper cards as a template which allowed his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer, his analytical engine. Limited finances and Babbage's inability to resist tinkering with the design meant that the device was never completed.

In the late 1880s, Herman Hollerith invented the recording of data on a machine readable medium. Prior uses of machine readable media, above, had been for control, not data. "After some initial trials with paper tape, he settled on punched cards ..." To process these punched cards he invented the tabulator, and the keypunch machines. These three inventions were the foundation of the modern information processing industry. Large-scale automated data processing of punched cards was performed for the 1890 United States Census by Hollerith's company, which later became the core of IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.

Alan Turing is widely regarded as the father of modern computer science. In 1936 Turing provided an influential formalisation of the concept of the algorithm and computation with the Turing machine. Of his role in the modern computer, Time Magazine, in naming Turing one of the 100 most influential people of the 20th century, states: "The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine."

In the 1970s, engineers at research institutions across the United States began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network that it produced was called the ARPANET. The technologies that made the Arpanet possible spread and evolved.

In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.

Examples of computers:

1. Supercomputer and Mainframe

Supercomputer is a broad term for one of the fastest computers currently available. Supercomputers are very expensive and are employed for specialized applications that require immense amounts of mathematical calculations (number crunching). For example, weather forecasting requires a supercomputer. Other uses of supercomputers include scientific simulations, (animated) graphics, fluid dynamic calculations, nuclear energy research, electronic design, and analysis of geological data (e.g. in petrochemical prospecting). Perhaps the best known supercomputer manufacturer is Cray Research.

Mainframe was a term originally referring to the cabinet containing the central processor unit or "main frame" of a room-filling Stone Age batch machine. After the emergence of smaller "minicomputer" designs in the early 1970s, the traditional big iron machines were described as "mainframe computers" and eventually just as mainframes. Nowadays a Mainframe is a very large and expensive computer capable of supporting hundreds, or even thousands, of users simultaneously. The chief difference between a supercomputer and a mainframe is that a supercomputer channels all its power into executing a few programs as fast as possible, whereas a mainframe uses its power to execute many programs concurrently. In some ways, mainframes are more powerful than supercomputers because they support more simultaneous programs. But supercomputers can execute a single program faster than a mainframe. The distinction between small mainframes and minicomputers is vague, depending really on how the manufacturer wants to market its machines.

2. Minicomputer

It is a midsize computer. In the past decade, the distinction between large minicomputers and small mainframes has blurred, however, as has the distinction between small minicomputers and workstations. But in general, a minicomputer is a multiprocessing system capable of supporting up to 200 users simultaneously.

3. Workstation

It is a type of computer used for engineering applications (CAD/CAM), desktop publishing, software development, and other types of applications that require a moderate amount of computing power and relatively high quality graphics capabilities. Workstations generally come with a large, high-resolution graphics screen, a large amount of RAM, built-in network support, and a graphical user interface. Most workstations also have a mass storage device such as a disk drive, but a special type of workstation, called a diskless workstation, comes without a disk drive. The most common operating systems for workstations are UNIX and Windows NT. Like personal computers, most workstations are single-user computers. However, workstations are typically linked together to form a local-area network, although they can also be used as stand-alone systems.

N.B.: In networking, workstation refers to any computer connected to a local-area network. It could be a workstation or a personal computer.

4. Personal computer:

It can be defined as a small, relatively inexpensive computer designed for an individual user. In price, personal computers range anywhere from a few hundred pounds to over five thousand pounds. All are based on the microprocessor technology that enables manufacturers to put an entire CPU on one chip. Businesses use personal computers for word processing, accounting, desktop publishing, and for running spreadsheet and database management applications. At home, the most popular use for personal computers is for playing games and recently for surfing the Internet.

Personal computers first appeared in the late 1970s. One of the first and most popular personal computers was the Apple II, introduced in 1977 by Apple Computer. During the late 1970s and early 1980s, new models and competing operating systems seemed to appear daily. Then, in 1981, IBM entered the fray with its first personal computer, known as the IBM PC. The IBM PC quickly became the personal computer of choice, and most other personal computer manufacturers fell by the wayside. P.C. is short for personal computer or IBM PC. One of the few companies to survive IBM's onslaught was Apple Computer, which remains a major player in the personal computer marketplace. Other companies adjusted to IBM's dominance by building IBM clones, computers that were internally almost the same as the IBM PC, but that cost less. Because IBM clones used the same microprocessors as IBM PCs, they were capable of running the same software. Over the years, IBM has lost much of its influence in directing the evolution of PCs. Therefore, after the release of the first PC by IBM, the term PC increasingly came to mean IBM or IBM-compatible personal computers, to the exclusion of other types of personal computers, such as Macintoshes. In recent years, the term PC has become more and more difficult to pin down. In general, though, it applies to any personal computer based on an Intel microprocessor, or on an Intel-compatible microprocessor. For nearly every other component, including the operating system, there are several options, all of which fall under the rubric of PC.

Today, the world of personal computers is basically divided between Apple Macintoshes and PCs. The principal characteristics of personal computers are that they are single-user systems and are based on microprocessors. However, although personal computers are designed as single-user systems, it is common to link them together to form a network. In terms of power, there is great variety. At the high end, the distinction between personal computers and workstations has faded. High-end models of the Macintosh and PC offer the same computing power and graphics capability as low-end workstations by Sun Microsystems, Hewlett-Packard, and DEC.

Personal Computer Types

Actual personal computers can be generally classified by size and chassis/case. The chassis or case is the metal frame that serves as the structural support for electronic components. Every computer system requires at least one chassis to house the circuit boards and wiring. The chassis also contains slots for expansion boards. If you want to insert more boards than there are slots, you will need an expansion chassis, which provides additional slots. There are two basic flavors of chassis design, desktop models and tower models, but there are many variations on these two basic types. Then come portable computers, which are small enough to carry. Portable computers include notebook and subnotebook computers, hand-held computers, palmtops, and PDAs.

5. Tower model

The term refers to a computer in which the power supply, motherboard, and mass storage devices are stacked on top of each other in a cabinet. This is in contrast to desktop models, in which these components are housed in a more compact box. The main advantage of tower models is that there are fewer space constraints, which makes installation of additional storage devices easier.

6. Desktop model

A computer designed to fit comfortably on top of a desk, typically with the monitor sitting on top of the computer. Desktop model computers are broad and low, whereas tower model computers are narrow and tall. Because of their shape, desktop model computers are generally limited to three internal mass storage devices. Desktop models designed to be very small are sometimes referred to as slimline models.

7. Notebook computer

An extremely lightweight personal computer. Notebook computers typically weigh less than 6 pounds and are small enough to fit easily in a briefcase. Aside from size, the principal difference between a notebook computer and a personal computer is the display screen. Notebook computers use a variety of techniques, known as flat-panel technologies, to produce a lightweight and non-bulky display screen. The quality of notebook display screens varies considerably. In terms of computing power, modern notebook computers are nearly equivalent to personal computers. They have the same CPUs, memory capacity, and disk drives. However, all this power in a small package is expensive. Notebook computers cost about twice as much as equivalent regular-sized computers. Notebook computers come with battery packs that enable you to run them without plugging them in. However, the batteries need to be recharged every few hours.

8. Laptop computer

A small, portable computer -- small enough that it can sit on your lap. Nowadays, laptop computers are more frequently called notebook computers.

9. Subnotebook computer

A portable computer that is slightly lighter and smaller than a full-sized notebook computer. Typically, subnotebook computers have a smaller keyboard and screen, but are otherwise equivalent to notebook computers.

10. Hand-held computer

A portable computer that is small enough to be held in one’s hand. Although extremely convenient to carry, handheld computers have not replaced notebook computers because of their small keyboards and screens. The most popular hand-held computers are those that are specifically designed to provide PIM (personal information manager) functions, such as a calendar and address book. Some manufacturers are trying to solve the small keyboard problem by replacing the keyboard with an electronic pen. However, these pen-based devices rely on handwriting recognition technologies, which are still in their infancy. Hand-held computers are also called PDAs, palmtops and pocket computers.

11. Palmtop

A small computer that literally fits in your palm. Compared to full-size computers, palmtops are severely limited, but they are practical for certain functions such as phone books and calendars. Palmtops that use a pen rather than a keyboard for input are often called hand-held computers or PDAs. Because of their small size, most palmtop computers do not include disk drives. However, many contain PCMCIA slots in which you can insert disk drives, modems, memory, and other devices. Palmtops are also called PDAs, hand-held computers and pocket computers.

12. PDA

Short for personal digital assistant, a handheld device that combines computing, telephone/fax, and networking features. A typical PDA can function as a cellular phone, fax sender, and personal organizer. Unlike portable computers, most PDAs are pen-based, using a stylus rather than a keyboard for input. This means that they also incorporate handwriting recognition features. Some PDAs can also react to voice input by using voice recognition technologies. The field of PDA was pioneered by Apple Computer, which introduced the Newton MessagePad in 1993. Shortly thereafter, several other manufacturers offered similar products. To date, PDAs have had only modest success in the marketplace, due to their high price tags and limited applications. However, many experts believe that PDAs will eventually become common gadgets.

Supercomputer and Mainframe.

Supercomputer:

· A supercomputer handles large amounts of scientific data.

· A supercomputer generates large amounts of heat and must be cooled. Cooling most supercomputers is a major HVAC problem.

· Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many metres across must have latencies between its components measured at least in the tens of nanoseconds.

· Supercomputers consume and produce massive amounts of data in a very short period of time.

· "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.

Mainframe:

· A mainframe is used to handle the data of big organizations.

· 90% of IBM's mainframes have CICS transaction processing software installed.[8] Other software staples include the IMS and DB2 databases, and WebSphere MQ and WebSphere Application Server middleware.

· As of 2004, IBM claimed over 200 new (21st-century) mainframe customers: customers that had never previously owned a mainframe.

· Most mainframes run continuously at over 70% busy. A 90% figure is typical, and modern mainframes tolerate sustained periods of 100% CPU utilization, queuing work according to business priorities without disrupting ongoing execution.

· Mainframes have a historical reputation for being "expensive," but the modern reality is much different. As of late 2006, it is possible to buy and configure a complete IBM mainframe system (with software, storage, and support), under standard commercial use terms, for about $50,000 (U.S.). The price of z/OS starts at about $1,500 (U.S.) per year, including 24x7 telephone and Web support.[9]

· In the unlikely event a mainframe needs repair, it is typically repaired without interruption to running applications. Also, memory, storage and processor modules can be added or hot swapped without interrupting applications. It is not unusual for a mainframe to be continuously switched on for months or years at a stretch.

Comparison between Supercomputer and Mainframe.

1. Both types of systems offer parallel processing, although this has not always been the case. Parallel processing (i.e., multiple CPUs executing instructions simultaneously) was used in supercomputers (e.g., the Cray-1) for decades before this feature appeared in mainframes, primarily due to cost at that time. Supercomputers typically expose parallel processing to the programmer in complex manners, while mainframes typically use it to run multiple tasks. One result of this difference is that adding processors to a mainframe often speeds up the entire workload transparently.

2. Supercomputers are optimized for complex computations that take place largely in memory, while mainframes are optimized for comparatively simple computations involving huge amounts of external data. For example, weather forecasting is suited to supercomputers, and insurance business or payroll processing applications are more suited to mainframes.

3. Supercomputers are often purpose-built for one or a very few specific institutional tasks (e.g. simulation and modeling). Mainframes typically handle a wider variety of tasks (e.g. data processing, warehousing). Consequently, most supercomputers can be one-off designs, whereas mainframes typically form part of a manufacturer's standard model lineup.

4. Mainframes tend to have numerous ancillary service processors assisting their main central processors (for cryptographic support, I/O handling, monitoring, memory handling, etc.) so that the actual "processor count" is much higher than would otherwise be obvious. Supercomputer design tends not to include as many service processors since they don't appreciably add to raw number-crunching power. This distinction is perhaps blurring over time as Moore's Law constraints encourage more specialization in server components.

5. Mainframes are exceptionally adept at batch processing, such as billing, owing to their heritage, decades of increasing customer expectations for batch improvements, and throughput-centric design. Supercomputers generally perform quite poorly in batch processing.

2. Ethical Value

What is Computer Ethics?

* This article first appeared in Terrell Ward Bynum, ed., Computers & Ethics, Blackwell, 1985, pp. 266–275. (A special issue of the journal Metaphilosophy.)

James H. Moor

A Proposed Definition

Computers are special technology and they raise some special ethical issues. In this essay I will discuss what makes computers different from other technology and how this difference makes a difference in ethical considerations. In particular, I want to characterize computer ethics and show why this emerging field is both intellectually interesting and enormously important.

On my view, computer ethics is the analysis of the nature and social impact of computer technology and the corresponding formulation and justification of policies for the ethical use of such technology. I use the phrase “computer technology” because I take the subject matter of the field broadly to include computers and associated technology. For instance, I include concerns about software as well as hardware and concerns about networks connecting computers as well as computers themselves.

A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used. Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate. A central task of computer ethics is to determine what we should do in such cases, i.e., to formulate policies to guide our actions. Of course, some ethical situations confront us as individuals and some as a society. Computer ethics includes consideration of both personal and social policies for the ethical use of computer technology.

Now it may seem that all that needs to be done is the mechanical application of an ethical theory to generate the appropriate policy. But this is usually not possible. A difficulty is that along with a policy vacuum there is often a conceptual vacuum. Although a problem in computer ethics may seem clear initially, a little reflection reveals a conceptual muddle. What is needed in such cases is an analysis which provides a coherent conceptual framework within which to formulate a policy for action. Indeed, much of the important work in computer ethics is devoted to proposing conceptual frameworks for understanding ethical problems involving computer technology.

An example may help to clarify the kind of conceptual work that is required. Let’s suppose we are trying to formulate a policy for protecting computer programs. Initially, the idea may seem clear enough. We are looking for a policy for protecting a kind of intellectual property. But then a number of questions which do not have obvious answers emerge. What is a computer program? Is it really intellectual property which can be owned or is it more like an idea, an algorithm, which is not owned by anybody? If a computer program is intellectual property, is it an expression of an idea that is owned (traditionally protectable by copyright) or is it a process that is owned (traditionally protectable by patent)? Is a machine-readable program a copy of a human-readable program? Clearly, we need a conceptualization of the nature of a computer program in order to answer these kinds of questions. Moreover, these questions must be answered in order to formulate a useful policy for protecting computer programs. Notice that the conceptualization we pick will not only affect how a policy will be applied but to a certain extent what the facts are. For instance, in this case the conceptualization will determine when programs count as instances of the same program.

Even within a coherent conceptual framework, the formulation of a policy for using computer technology can be difficult. As we consider different policies we discover something about what we value and what we don’t. Because computer technology provides us with new possibilities for acting, new values emerge. For example, creating software has value in our culture which it didn’t have a few decades ago. And old values have to be reconsidered. For instance, assuming software is intellectual property, why should intellectual property be protected? In general, the consideration of alternative policies forces us to discover and make explicit what our value preferences are.

The mark of a basic problem in computer ethics is one in which computer technology is essentially involved and there is an uncertainty about what to do and even about how to understand the situation. Hence, not all ethical situations involving computers are central to computer ethics. If a burglar steals available office equipment including computers, then the burglar has done something legally and ethically wrong. But this is really an issue for general law and ethics. Computers are only accidentally involved in this situation, and there is no policy or conceptual vacuum to fill. The situation and the applicable policy are clear.

In one sense I am arguing for the special status of computer ethics as a field of study. Applied ethics is not simply ethics applied. But, I also wish to stress the underlying importance of general ethics and science to computer ethics. Ethical theory provides categories and procedures for determining what is ethically relevant. For example, what kinds of things are good? What are our basic rights? What is an impartial point of view? These considerations are essential in comparing and justifying policies for ethical conduct. Similarly, scientific information is crucial in ethical evaluations. It is amazing how many times ethical disputes turn not on disagreements about values but on disagreements about facts.

On my view, computer ethics is a dynamic and complex field of study which considers the relationships among facts, conceptualizations, policies and values with regard to constantly changing computer technology. Computer ethics is not a fixed set of rules which one shellacs and hangs on the wall. Nor is computer ethics the rote application of ethical principles to a value-free technology. Computer ethics requires us to think anew about the nature of computer technology and our values. Although computer ethics is a field between science and ethics and depends on them, it is also a discipline in its own right which provides both conceptualizations for understanding and policies for using computer technology.

Though I have indicated some of the intellectually interesting features of computer ethics, I have not said much about the problems of the field or about its practical importance. The only example I have used so far is the issue of protecting computer programs which may seem to be a very narrow concern. In fact, I believe the domain of computer ethics is quite large and extends to issues which affect all of us. Now I want to turn to a consideration of these issues and argue for the practical importance of computer ethics. I will proceed not by giving a list of problems but rather by analyzing the conditions and forces which generate ethical issues about computer technology. In particular, I want to analyze what is special about computers, what social impact computers will have, and what is operationally suspect about computing technology. I hope to show something of the nature of computer ethics by doing some computer ethics.

By James H. Moor.

Group Members

Faycal Aliou : 0730187
Aziz Maraimov : 0823209
Abdulrahman El- Shayeb : 0824803
Mohamad Farook : 0824517

Artificial Intelligence

Mechanical or "formal" reasoning has been developed by philosophers and mathematicians since antiquity. The study of logic led directly to the invention of the programmable digital electronic computer, based on the work of mathematician Alan Turing and others. Turing's theory of computation suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable act of mathematical deduction. This, along with recent discoveries in neurology, information theory and cybernetics, inspired a small group of researchers to begin to seriously consider the possibility of building an electronic brain.

The field of AI research was founded at a conference on the campus of Dartmouth College in the summer of 1956. The attendees would become the leaders of AI research for many decades, especially John McCarthy, Marvin Minsky, Allen Newell and Herbert Simon, who founded AI laboratories at MIT, CMU and Stanford. By 1965, research was also underway in England, led by Donald Michie, who founded a similar laboratory at the University of Edinburgh. These laboratories produced programs that were, to most people, simply astonishing: computers were solving word problems in algebra, proving logical theorems and speaking English. By the middle 60s AI was heavily funded by the U.S. Department of Defense[29] and many were optimistic about the future of the field. Herbert Simon predicted that "machines will be capable, within twenty years, of doing any work a man can do" and Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".

Deduction, reasoning, problem solving

Early AI researchers developed algorithms that imitated the step-by-step reasoning that human beings use when they solve puzzles, play board games or make logical deductions. By the late 80s and 90s, AI research had also developed highly successful methods for dealing with uncertain or incomplete information, employing concepts from probability and economics.

For difficult problems, most of these algorithms can require enormous computational resources — most experience a "combinatorial explosion": the amount of memory or computer time required becomes astronomical when the problem goes beyond a certain size. The search for more efficient problem solving algorithms is a high priority for AI research.

Human beings solve most of their problems using fast, intuitive judgments rather than the conscious, step-by-step deduction that early AI research was able to model. AI has made some progress at imitating this kind of "sub-symbolic" problem solving: embodied approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research attempts to simulate the structures inside human and animal brains that give rise to this skill.

Knowledge representation

Main articles: Knowledge representation and Commonsense knowledge

Knowledge representation and knowledge engineering are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects; situations, events, states and time; causes and effects; knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A complete representation of "what exists" is an ontology (borrowing a word from traditional philosophy), of which the most general are called upper ontologies.
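To make this a little more concrete, here is a minimal sketch in Python of one common way such knowledge can be represented: as (subject, relation, object) triples over objects, categories and relations, with a small routine that follows category links. The facts and names are invented purely for illustration and do not describe any particular AI system.

```python
# Minimal sketch: knowledge as (subject, relation, object) triples.
# The facts below are invented for illustration only.
facts = [
    ("Tweety", "is_a", "canary"),
    ("canary", "is_a", "bird"),
    ("bird", "can", "fly"),
]

def categories_of(thing, facts):
    """Follow 'is_a' links transitively to collect every category of a thing."""
    found = set()
    frontier = [thing]
    while frontier:
        current = frontier.pop()
        for subj, rel, obj in facts:
            if subj == current and rel == "is_a" and obj not in found:
                found.add(obj)
                frontier.append(obj)
    return found

print(sorted(categories_of("Tweety", facts)))  # ['bird', 'canary']
```

Real knowledge representation systems (ontologies, description logics) are far richer than this, but the basic idea of storing relations between objects and categories is the same.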

Planning

Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future (they must have a representation of the state of the world and be able to make predictions about how their actions will change it) and be able to make choices that maximize the utility (or "value") of the available choices.

In classical planning problems, the agent can assume that it is the only thing acting on the world and it can be certain what the consequences of its actions may be. However, if this is not true, it must periodically check if the world matches its predictions and it must change its plan as this becomes necessary, requiring the agent to reason under uncertainty.

Learning

Machine learning has been central to AI research from the beginning. Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes both classification and numerical regression. Classification is used to determine what category something belongs in, after seeing a number of examples of things from several categories. Regression takes a set of numerical input/output examples and attempts to discover a continuous function that would generate the outputs from the inputs. In reinforcement learning the agent is rewarded for good responses and punished for bad ones. These can be analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory.
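To illustrate the simplest of these learning modes, the sketch below shows supervised numerical regression in Python: fitting a straight line to input/output pairs by least squares and using it to predict a new output. The data are made up solely for the example.

```python
# Minimal sketch of supervised learning (numerical regression):
# fit a line y = a*x + b to example pairs, then predict new outputs.

def fit_line(xs, ys):
    """Return slope a and intercept b minimizing squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov_xy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    a = cov_xy / var_x
    b = mean_y - a * mean_x
    return a, b

# Invented training examples: hours of study (input) vs. test score (output).
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [52.0, 57.0, 61.0, 68.0, 71.0]

a, b = fit_line(xs, ys)
print(f"learned model: score = {a:.2f} * hours + {b:.2f}")
print(f"prediction for 6 hours: {a * 6 + b:.1f}")
```

Classification works analogously but predicts a category rather than a number, and reinforcement learning replaces the fixed examples with rewards and punishments received over time.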

Natural language processing

Natural language processing gives machines the ability to read and understand the languages that the human beings speak. Many researchers hope that a sufficiently powerful natural language processing system would be able to acquire knowledge on its own, by reading the existing text available over the internet. Some straightforward applications of natural language processing include information retrieval (or text mining) and machine translation.

Perception

Machine perception is the ability to use input from sensors (such as cameras, microphones, sonar and others more exotic) to deduce aspects of the world. Computer vision is the ability to analyze visual input. A few selected sub-problems are speech recognition, facial recognition and object recognition.

Social intelligence

Emotion and social skills play two roles for an intelligent agent. First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory, decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Also, for good human-computer interaction, an intelligent machine also needs to display emotions. At the very least it must appear polite and sensitive to the humans it interacts with. At best, it should have normal emotions itself.

General intelligence

Most researchers hope that their work will eventually be incorporated into a machine with general intelligence (known as strong AI), combining all the skills above and exceeding human abilities at most or all of them. A few believe that anthropomorphic features like artificial consciousness or an artificial brain may be required for such a project.

Many of the problems above are considered AI-complete: to solve one problem, you must solve them all. For example, even a straightforward, specific task like machine translation requires that the machine follow the author's argument (reason), know what is being talked about (knowledge), and faithfully reproduce the author's intention (social intelligence). Machine translation, therefore, is believed to be AI-complete: it may require strong AI to be done as well as humans can do it.

Approaches

There is no established unifying theory or paradigm that guides AI research. Researchers disagree about many issues. A few of the most long standing questions that have remained unanswered are these: should artificial intelligence simulate natural intelligence, by studying psychology or neurology? Or is human biology as irrelevant to AI research as bird biology is to aeronautical engineering? Can intelligent behavior be described using simple, elegant principles (such as logic or optimization)? Or does it necessarily require solving a large number of completely unrelated problems? Can intelligence be reproduced using high-level symbols, similar to words and ideas? Or does it require "sub-symbolic" processing?

Search and optimization

Main articles: Search algorithm, Optimization (mathematics), and Evolutionary computation

Many problems in AI can be solved in theory by intelligently searching through many possible solutions: Reasoning can be reduced to performing a search. For example, logical proof can be viewed as searching for a path that leads from premises to conclusions, where each step is the application of an inference rule. Planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis. Robotics algorithms for moving limbs and grasping objects use local searches in configuration space. Many learning algorithms use search algorithms based on optimization.

Simple exhaustive searches are rarely sufficient for most real world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes. The solution, for many problems, is to use "heuristics" or "rules of thumb" that eliminate choices that are unlikely to lead to the goal (called "pruning the search tree"). Heuristics supply the program with a "best guess" for what path the solution lies on.

A very different kind of search came to prominence in the 1990s, based on the mathematical theory of optimization. For many problems, it is possible to begin the search with some form of a guess and then refine the guess incrementally until no more refinements can be made. These algorithms can be visualized as blind hill climbing: we begin the search at a random point on the landscape, and then, by jumps or steps, we keep moving our guess uphill, until we reach the top. Other optimization algorithms are simulated annealing, beam search and random optimization.[100]
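The following minimal Python sketch shows blind hill climbing as just described: start from a random guess and keep moving to a neighbouring guess whenever it scores better, stopping when no local improvement remains. The objective function is an arbitrary toy example, not taken from the text.

```python
# Minimal hill-climbing sketch over a one-dimensional toy landscape.
import random

def score(x):
    # Toy landscape with a single peak at x = 2.
    return -(x - 2.0) ** 2

def hill_climb(start, step=0.1, max_iters=10_000):
    current = start
    for _ in range(max_iters):
        # Examine two neighbouring guesses, one step left and one step right.
        neighbours = [current - step, current + step]
        best = max(neighbours, key=score)
        if score(best) <= score(current):
            break  # no refinement improves the guess; a local top is reached
        current = best
    return current

print(hill_climb(random.uniform(-10, 10)))  # converges near 2.0
```

On landscapes with many peaks, plain hill climbing can get stuck on a local top, which is exactly what methods such as simulated annealing and random restarts try to avoid.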

Evolutionary computation uses a form of optimization search. For example, an evolutionary algorithm may begin with a population of organisms (the guesses) and then allow them to mutate and recombine, selecting only the fittest to survive each generation (refining the guesses). Forms of evolutionary computation include swarm intelligence algorithms (such as ant colony or particle swarm optimization) and evolutionary algorithms (such as genetic algorithms and genetic programming).

Logic

Logic was introduced into AI research by John McCarthy in his 1958 Advice Taker proposal. Logic is used for knowledge representation and problem solving, but it can be applied to other problems as well. For example, the satplan algorithm uses logic for planning and inductive logic programming is a method for learning.

There are several different forms of logic used in AI research. Propositional or sentential logic is the logic of statements which can be true or false. First-order logic also allows the use of quantifiers and predicates, and can express facts about objects, their properties, and their relations with each other. Fuzzy logic is a version of first-order logic which allows the truth of a statement to be represented as a value between 0 and 1, rather than simply True (1) or False (0). Fuzzy systems can be used for uncertain reasoning and have been widely used in modern industrial and consumer product control systems. Default logics, non-monotonic logics and circumscription are forms of logic designed to help with default reasoning and the qualification problem. Several extensions of logic have been designed to handle specific domains of knowledge, such as: description logics; situation calculus, event calculus and fluent calculus (for representing events and time); causal calculus; belief calculus; and modal logics.

In 1963, J. Alan Robinson discovered a simple, complete and entirely algorithmic method for logical deduction which can easily be performed by digital computers. However, a naive implementation of the algorithm quickly leads to a combinatorial explosion or an infinite loop. In 1974, Robert Kowalski suggested representing logical expressions as Horn clauses (statements in the form of rules: "if p then q"), which reduced logical deduction to backward chaining or forward chaining. This greatly alleviated (but did not eliminate) the problem.
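As a small illustration of the idea, here is a minimal Python sketch of forward chaining over Horn clauses ("if p then q"): known facts are repeatedly extended by any rule whose premises are all satisfied, until nothing new can be derived. The rules and facts are invented for the example and are not drawn from any real system.

```python
# Minimal forward chaining over Horn clauses.
# Each rule is (list_of_premises, conclusion); all data is invented.
rules = [
    (["rain"], "wet_ground"),
    (["wet_ground", "freezing"], "icy_road"),
    (["icy_road"], "drive_slowly"),
]
facts = {"rain", "freezing"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if conclusion not in facts and all(p in facts for p in premises):
            facts.add(conclusion)   # the rule "fires"
            changed = True

print(sorted(facts))
# ['drive_slowly', 'freezing', 'icy_road', 'rain', 'wet_ground']
```

Backward chaining works in the opposite direction, starting from a goal and searching for rules and facts that could establish it, which is essentially how Prolog answers queries.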

In addition to the subject areas mentioned above, significant work in artificial intelligence has been done on puzzles and reasoning tasks, induction and concept identification, symbolic mathematics, theorem proving in formal logic, natural language understanding and generation, vision, robotics, chemistry, biology, engineering analysis, computer-assisted instruction, and computer-program synthesis and verification, to name only the most prominent. As computers become smaller and less expensive, more and more intelligence is built into automobiles, appliances, and other machines, as well as computer software, in everyday use.

Artificial Intelligence in Medical Diagnosis

In an attempt to overcome limitations inherent in conventional computer-aided diagnosis, investigators have created programs that simulate expert human reasoning. Hopes that such a strategy would lead to clinically useful programs have not been fulfilled, but many of the problems impeding creation of effective artificial intelligence programs have been solved. Strategies have been developed to limit the number of hypotheses that a program must consider and to incorporate pathophysiologic reasoning. The latter innovation permits a program to analyze cases in which one disorder influences the presentation of another. Prototypes embodying such reasoning can explain their conclusions in medical terms that can be reviewed by the user. Despite these advances, further major research and developmental efforts will be necessary before expert performance by the computer becomes a reality.

We will focus on how improved representations of clinical knowledge and sophisticated problem-solving strategies have advanced the field of artificial intelligence in medicine. Our purpose is to provide an overview of artificial intelligence in medicine to the physician who has had little contact with computer science. We will not concentrate on individual programs; rather, we will draw on the key insights of such programs to create a coherent picture of artificial intelligence in medicine and the promising directions in which the field is moving. We will therefore describe the behavior not of a single existing program but the approach taken by one or another of the many programs to which we refer. It remains an important challenge to combine successfully the best characteristics of these programs to build effective computer-based medical expert systems. Several collections of papers (19-21) provide detailed descriptions of the programs on which our analysis is based.

Function: Clinical Problem-Solving

Any program designed to serve as a consultant to the physician must contain certain basic features. It must have a store of medical knowledge expressed as descriptions of possible diseases. Depending on the breadth of the clinical domain, the number of hypotheses in the database can range from a few to many thousands. In the simplest conceivable representation of such knowledge, each disease hypothesis identifies all of the features that can occur in the particular disorder. In addition, the program must be able to match what is known about the patient with its store of information. Even the most sophisticated programs typically depend on this basic strategy.

The simplest version of such programs operates in the following fashion when presented with the chief complaint and when later given additional facts; a short code sketch of the procedure follows the three steps below.

1. For each possible disease (diagnosis) determine whether the given findings are to be expected.

2. Score each disease (diagnosis) by counting the number of given findings that would have been expected.

3. Rank-order the possible diseases (diagnoses) according to their scores.
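The Python sketch below illustrates steps 1 to 3 with an invented table of disease profiles and findings; it carries no medical authority and is only meant to show how the matching and ranking could be organized.

```python
# Minimal sketch of steps 1-3: score each disease hypothesis by how many of
# the patient's findings it would be expected to produce, then rank-order
# the hypotheses. All disease profiles and findings are invented.

disease_profiles = {
    "disease_A": {"fever", "cough", "fatigue"},
    "disease_B": {"fever", "rash"},
    "disease_C": {"cough", "chest_pain", "fatigue"},
}

patient_findings = {"fever", "cough"}

def rank_hypotheses(findings, profiles):
    scores = {
        disease: len(findings & expected)   # count of expected findings present
        for disease, expected in profiles.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

for disease, score in rank_hypotheses(patient_findings, disease_profiles):
    print(disease, score)
# disease_A scores 2; disease_B and disease_C each score 1
```

Steps 4 to 6 extend this loop by asking about the not-yet-considered features of the highest-ranked hypothesis and re-scoring whenever a new finding arrives.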

The power of such a simple program can be greatly enhanced through the use of a mechanism that poses questions designed to elicit useful information. Take, for example, an expansion of the basic program by the following strategy:

4. Select the highest-ranking hypothesis and ask whether one of the features of that disease, not yet considered, is present or absent.

5. If inquiry has been made about all possible features of the highest-ranked hypothesis, ask about the features of the next best hypothesis.

6. If a new finding is offered, begin again with step 1; otherwise, print out the rank-ordered diagnoses and their respective supportive findings and stop.

Advantages: Programs using artificial intelligence techniques have several major advantages over programs using more traditional methods. These programs have a greater capacity to quickly narrow the number of diagnostic possibilities, they can effectively use pathophysiologic reasoning, and they can create models of a specific patient's illness. Such models can even capture the complexities created by several disease states that interact and overlap. These programs can also explain in a straightforward manner how particular conclusions have been reached. This latter ability promises to be of critical importance when expert systems become available for day-to-day use; unless physicians can assess the validity of a program's conclusions, they cannot rely on the computer as a consultant. Indeed, a recent survey has shown that a program's ability to explain its reasoning is considered by clinicians to be more important than its ability to arrive consistently at the correct diagnosis. An explanatory capability will also be required by those responsible for correcting errors or modifying programs; as programs become larger and more complicated, no one will be able to penetrate their complexity without help from the programs themselves.

Disadvantages: Most approaches to computer-assisted diagnosis have, until the past few years, been based on one of three strategies: flow charts, statistical pattern-matching, or probability theory. All three techniques have been successfully applied to narrow medical domains, but each has serious drawbacks when applied to broad areas of clinical medicine. Flow charts quickly become unmanageably large. Further, they are unable to deal with uncertainty, a key element in most serious diagnostic problems. Probabilistic methods and statistical pattern-matching typically incorporate unwarranted assumptions, such as that the set of diseases under consideration is exhaustive, that the diseases under suspicion are mutually exclusive, or that each clinical finding occurs independently of all others. In theory, these problems could be avoided by establishing a database of probabilities that copes with all possible interactions. But gathering and maintaining such a massive database would be a nearly impossible task. Moreover, all programs that rely solely on statistical techniques ignore causality of disease and thus cannot explain to the physician their reasoning processes or how they reach their diagnostic conclusions.

Cybernetics is the science of control. Its name, appropriately suggested by the mathematician Norbert Wiener (1894-1964), is derived from the Greek for ‘steersman’, pointing to the essence of cybernetics as the study and design of devices for maintaining stability, or for homing in on a goal or target. Its central concept is feedback. Since the ‘devices’ may be living or man-made, cybernetics bridges biology and engineering.

Stability of the human body is achieved by its static geometry and, very differently, by its dynamic control. A statue of a human being has to have a large base or it topples over. It falls when the centre of mass is vertically outside the base of the feet. Living people make continuous corrections to maintain themselves standing. Small deviations of posture are signaled by sensory signals (proprioception) from nerve fibers in the muscles and around the joint capsules of the ankles and legs, and by the otoliths (the organs of balance in the inner ear). Corrections of posture are the result of dynamic feedback from these senses, to maintain dynamic stability. When walking towards a target, such as the door of a room, deviations from the path are noted, mainly visually, and corrected from time to time during the movement, until the goal is reached. The key to this process is continuous correction of the output system by signals representing detected errors of the output, known as ‘negative feedback’. The same principle, often called servo-control, is used in engineering, in order to maintain the stability of machinery and to seek and find goals, with many applications such as guided missiles and autopilots.
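The essence of negative feedback can be shown in a few lines of code. The Python sketch below models a crude version of walking toward a door: at each step the detected error between the goal and the current position is fed back, scaled by a gain, to correct the output. All numbers are invented for illustration and are not a model of any real controller.

```python
# Minimal sketch of negative feedback (servo control).
goal = 10.0        # target position (e.g. the door)
position = 0.0     # current position
gain = 0.3         # "loop gain": fraction of the detected error corrected per step

for step in range(20):
    error = goal - position      # detected deviation from the goal
    position += gain * error     # negative feedback: correction opposes the error
    print(f"step {step:2d}: position = {position:.2f}")
# The position closes in on 10.0. With too large a gain (above 2.0 in this toy
# model) each correction overshoots and the output oscillates, which is the
# instability discussed below.
```

The same loop, with a temperature sensor as the error detector and a heater as the output, is a thermostat; with a gyroscope and rudder it is an autopilot.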

The principles of feedback apply to the body's regulation of temperature, blood pressure, and so on. Though the principles are essentially the same as in engineering, for living organisms dynamic stability by feedback is often called ‘homeostasis’, following W. B. Cannon's pioneering book The Wisdom of the Body (1932). In the history of engineering, there are hints of the principle going back to ancient Greek devices, such as self-regulating oil lamps. From the Middle Ages, the tail vanes of windmills, continuously steering the sails into the veering wind, are well-known early examples of guidance by feedback. A more sophisticated system reduced the weight of the upper grinding stone when the wind fell, to keep the mill operating optimally in changing conditions. Servo-systems using feedback can make machines remarkably life-like. The first feedback device to be mathematically described was the rotary governor, used by James Watt to keep the speed of steam engines constant under varying loads.

Servo-systems suffer characteristic oscillations when the output overshoots the target, as occurs when the feedback arrives too late or corrects too strongly. Increasing the ‘loop gain’ (i.e. the magnitude of correction resulting from a particular feedback signal) too far produces tremor in machines and organisms alike. It is tempting to believe that the ‘intention tremor’ of patients who have suffered damage to the cerebellum is caused by a change in the characteristics of servo control.
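
The effect of loop gain can be illustrated by varying the gain in the same kind of loop: a modest gain converges smoothly on the target, while too high a gain makes the output overshoot and oscillate around it -- the ‘tremor’ referred to above. This is only a sketch with made-up numbers, not a model of any real servo or of the cerebellum.

```python
# Overshoot and oscillation as loop gain increases (illustrative only).
def run_servo(gain, goal=10.0, steps=8):
    position, history = 0.0, []
    for _ in range(steps):
        error = goal - position
        position += gain * error
        history.append(round(position, 2))
    return history

for gain in (0.5, 1.5, 1.9, 2.1):
    print(f"gain={gain}: {run_servo(gain)}")
# gain < 1 approaches the goal smoothly; gain > 1 overshoots and rings;
# gain > 2 diverges -- the corrections themselves become the disturbance.
```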

Dynamic control requires the transmission of information. Concepts of information are included in cybernetics, especially following Claude Shannon's important mathematical analysis of 1949. Cybernetics does not, however, cover digital computing: cybernetic systems are usually analogue, and digital computing is described with very different concepts. Early Artificial Intelligence (AI) was analogue-based (reaching mental goals by correcting abstract errors), and there has recently been a return to analogue computing systems, with self-organizing ‘neural nets’.

A principal pioneer of cybernetic concepts of brain function was the Cambridge psychologist Kenneth Craik, who described thinking in terms of physical models analogous to physiological processes. Craik pointed to engineering examples, such as Kelvin's tide predictor, which predicted tides with a system of pulleys and levers. The essential cybernetic philosophy of neurophysiology is that the brain functions by such principles as feedback and information, represented by electro-chemical, physical activity in the nervous system. It is assumed that this creates mind: so, in principle, and no doubt in practice, machines can be fully mindful.

Influences

Winograd and Flores credit the influence of Humberto Maturana, a biologist who recasts the concepts of "language" and "living system" with a cybernetic eye [Maturana & Varela 1988], in shifting their opinions away from the AI perspective. They quote Maturana: "Learning is not a process of accumulation of representations of the environment; it is a continuous process of transformation of behavior through continuous change in the capacity of the nervous system to synthesize it. Recall does not depend on the indefinite retention of a structural invariant that represents an entity (an idea, image or symbol), but on the functional ability of the system to create, when certain recurrent demands are given, a behavior that satisfies the recurrent demands or that the observer would class as a reenacting of a previous one." [Maturana 1980] Cybernetics has directly affected software for intelligent training, knowledge representation, cognitive modeling, computer-supported coöperative work, and neural modeling, and useful results have been demonstrated in all these areas. Like AI, however, cybernetics has not produced recognizable solutions to the machine intelligence problem, at least not for domains considered complex in the metrics of symbolic processing. Many beguiling artifacts have been produced whose appeal is more familiar in an entertainment medium or in organic life than in a piece of software [Pask 1971]. Meanwhile, in a repetition of the history of the 1950s, the influence of cybernetics is felt throughout the hard and soft sciences, as well as in AI. This time, however, it is cybernetics' epistemological stance--that all human knowing is constrained by our perceptions and our beliefs, and hence is subjective--that is its contribution to these fields. We must continue to wait to see whether cybernetics leads to breakthroughs in the construction of intelligent artifacts of the complexity of a nervous system, or a brain.

Cybernetics Today

The term "cybernetics" has been widely misunderstood, perhaps for two broad reasons. First, its identity and boundary are difficult to grasp. The nature of its concepts and the breadth of its applications, as described above, make it difficult for non-practitioners to form a clear concept of cybernetics. This holds even for professionals of all sorts, because cybernetics never became a popular discipline in its own right; rather, its concepts and viewpoints seeped into many other disciplines, from sociology and psychology to design methods and post-modern thought. Second, the advent of the prefix "cyb" or "cyber" as a referent to either robots ("cyborgs") or the Internet ("cyberspace") further diluted its meaning, to the point of serious confusion for everyone except a small number of cybernetics experts.

However, the concepts and origins of cybernetics have become of greater interest recently, especially since around the year 2000. Lack of success by AI to create intelligent machines has increased curiosity toward alternative views of what a brain does [Ashby 1960] and alternative views of the biology of cognition [Maturana 1970]. There is growing recognition of the value of a "science of subjectivity" that encompasses both objective and subjective interactions, including conversation [Pask 1976]. Designers are rediscovering the influence of cybernetics on the tradition of 20th-century design methods, and the need for rigorous models of goals, interaction, and system limitations for the successful development of complex products and services, such as those delivered via today's software networks. And, as in any social cycle, students of history reach back with minds more open than was possible at the inception of cybernetics, to reinterpret the meaning and contribution of a previous era.

Robotics

A robot is a virtual or mechanical artificial agent. In practice, it is usually an electro-mechanical machine guided by computer or electronic programming, and thus able to do tasks on its own. Another common characteristic is that, by its appearance or movements, a robot often conveys a sense that it has intent or agency of its own.

While there is no single correct definition of "robot", a typical robot will have several or possibly all of the following characteristics.

It is an electric machine which has some ability to interact with physical objects and to be given electronic programming to do a specific task or to do a whole range of tasks or actions. It may also have some ability to perceive and absorb data on physical objects, or on its local physical environment, or to process data, or to respond to various stimuli. This is in contrast to a simple mechanical device such as a gear or a hydraulic press or any other item which has no processing ability and which does tasks through purely mechanical processes and motion.
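
The characteristics listed above -- perceiving the environment, processing the data, and responding to stimuli -- are often organized in software as a "sense-think-act" loop. The following Python sketch is a deliberately simplified illustration of that structure; the sensor reading and motor command are placeholders rather than a real robot API.

```python
import random

# A minimal sense-think-act loop (all hardware calls are simulated).
def sense():
    """Pretend distance sensor: returns distance to the nearest obstacle in cm."""
    return random.uniform(5, 100)

def think(distance_cm):
    """Very simple decision rule based on the perceived data."""
    return "turn" if distance_cm < 20 else "forward"

def act(command):
    """Stand-in for sending a command to the motors."""
    print(f"motor command: {command}")

for _ in range(5):                 # the robot's control loop
    reading = sense()
    decision = think(reading)
    act(decision)
```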

Social impact

As robots have become more advanced and sophisticated, experts and academics have increasingly explored the questions of what ethics might govern robots' behavior, and whether robots might be able to claim any kind of social, cultural, ethical or legal rights. One scientific team has said that it is possible that a robot brain will exist by 2019. Others predict robot intelligence breakthroughs by 2050. Recent advances have made robotic behavior more sophisticated.

Vernor Vinge has suggested that a moment may come when computers and robots are smarter than humans. He calls this "the Singularity." He suggests that it may be somewhat or possibly very dangerous for humans. This is discussed by a philosophy called Singularitarianism.

In 2009, experts attended a conference to discuss whether computers and robots might be able to acquire any autonomy, and how much these abilities might pose a threat or hazard. They noted that some robots have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls. Various media sources and scientific groups have noted separate trends in differing areas which might together result in greater robotic functionalities and autonomy, and which pose some inherent concerns.

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions. There are also concerns about technology which might allow some armed robots to be controlled mainly by other robots. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions. Some public concerns about autonomous robots have received media attention, especially one robot, EATR, which can continually refuel itself using biomass and organic substances which it finds on battlefields or other local environments.

The Association for the Advancement of Artificial Intelligence has studied this topic in depth and its president has commissioned a study to look at this issue.

Some have suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane. Several such measures reportedly already exist, with robot-heavy countries such as Japan and South Korea having begun to pass regulations requiring robots to be equipped with safety systems, and possibly sets of 'laws' akin to Asimov's Three Laws of Robotics. An official report was issued in 2009 by the Japanese government's Robot Industry Policy Committee. Chinese officials and researchers have issued a report suggesting a set of ethical rules, as well as a set of new legal guidelines referred to as "Robot Legal Studies." Concern has also been expressed over the possibility of robots telling apparent falsehoods.

Advantages: Increased productivity, accuracy, and endurance

Many factory jobs are now performed by robots. This has led to cheaper mass-produced goods, including automobiles and electronics. Stationary manipulators used in factories have become the largest market for robots. In 2006, there were an estimated 3,540,000 service robots in use, and an estimated 950,000 industrial robots. A different estimate counted more than one million robots in operation worldwide in the first half of 2008, with roughly half in Asia, 32% in Europe, 16% in North America, 1% in Australasia and 1% in Africa.

Some examples of factory robots:

§ Car production: Over the last three decades automobile factories have become dominated by robots. A typical factory contains hundreds of industrial robots working on fully automated production lines, with one robot for every ten human workers. On an automated production line, a vehicle chassis on a conveyor is welded, glued, painted and finally assembled at a sequence of robot stations.

§ Packaging: Industrial robots are also used extensively for palletizing and packaging of manufactured goods, for example for rapidly taking drink cartons from the end of a conveyor belt and placing them into boxes, or for loading and unloading machining centers.

§ Electronics: Mass-produced printed circuit boards (PCBs) are almost exclusively manufactured by pick-and-place robots, typically with SCARA manipulators, which remove tiny electronic components from strips or trays, and place them on to PCBs with great accuracy. Such robots can place hundreds of thousands of components per hour, far out-performing a human in speed, accuracy, and reliability.

Disadvantages: Fears and concerns about robots have been repeatedly expressed in a wide range of books and films. A common theme is the development of a master race of conscious and highly intelligent robots, motivated to take over or destroy the human race. Some fictional robots are programmed to kill and destroy; others gain superhuman intelligence and abilities by upgrading their own software and hardware. Another common theme is the reaction, sometimes called the "uncanny valley", of unease and even revulsion at the sight of robots that mimic humans too closely. Frankenstein (1818), often called the first science fiction novel, has become synonymous with the theme of a robot or monster advancing beyond its creator. In the TV show Futurama, robots are portrayed as humanoid figures that live alongside humans rather than as robotic butlers; they still work in industry, but they also lead daily lives of their own.

Manuel De Landa has noted that "smart missiles" and autonomous bombs equipped with artificial perception can be considered robots, and they make some of their decisions autonomously. He believes this represents an important and dangerous trend in which humans are handing over important decisions to machines.

Marauding robots may have entertainment value, but unsafe use of robots constitutes an actual danger. A heavy industrial robot with powerful actuators and unpredictably complex behavior can cause harm, for instance by stepping on a human's foot or falling on a human. Most industrial robots operate inside a security fence which separates them from human workers, but not all. Two robot-caused deaths are those of Robert Williams and Kenji Urada. Robert Williams was struck by a robotic arm at a casting plant in Flat Rock, Michigan on January 25, 1979. 37-year-old Kenji Urada, a Japanese factory worker, was killed in 1981. Urada was performing routine maintenance on the robot, but neglected to shut it down properly, and was accidentally pushed into a grinding machine.

Artificial Intelligence Programming for Video Games

Today it is almost impossible to write professional-quality games without using at least some aspects of artificial intelligence. Artificial intelligence (AI) is a useful tool for creating characters that have a choice of responses to the player's actions, yet are able to act in a fairly unpredictable fashion.

Video game artificial intelligence is a programming area that tries to make the computer act in a way similar to human intelligence. There are a number of underlying principles behind video game AI, the major one being a rule-based system: information and rules are entered into a database, and when the AI is faced with a situation, it finds the appropriate information and acts on it according to the rules that apply to that situation. If the database is large enough, there is sufficient unpredictability in its responses to produce a simulation of human choice.
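
A bare-bones version of such a rule database might look like the Python sketch below: the current situation is matched against stored rules, and a small random choice among the matching responses supplies the unpredictability. The situations and actions are invented for illustration only.

```python
import random

# Rule database: condition (set of facts about the situation) -> possible actions.
RULES = [
    ({"player_visible", "low_health"}, ["flee", "call_for_help"]),
    ({"player_visible"},               ["attack", "take_cover"]),
    ({"heard_noise"},                  ["investigate", "ignore"]),
]

def choose_action(situation):
    """Pick an action from the most specific rule matching the situation."""
    matching = [(cond, acts) for cond, acts in RULES if cond <= situation]
    if not matching:
        return "patrol"                                  # default behavior
    cond, acts = max(matching, key=lambda r: len(r[0]))  # prefer specific rules
    return random.choice(acts)                           # source of unpredictability

print(choose_action({"player_visible", "low_health"}))   # e.g. "flee"
print(choose_action({"heard_noise"}))                    # e.g. "investigate"
```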

More broadly, game artificial intelligence refers to techniques used in computer and video games to produce the illusion of intelligence in the behavior of non-player characters (NPCs). The techniques used typically draw upon existing methods from the field of artificial intelligence (AI). However, the term game AI is often used to refer to a broad set of algorithms that also include techniques from control theory, robotics, computer graphics and computer science in general.

Since game AI is centered on the appearance of intelligence and good game play, its approach is very different from that of traditional AI; hacks and cheats are acceptable and, in many cases, the computer's abilities must be toned down to give human players a sense of fairness. This is true, for example, in first-person shooter games, where NPCs' otherwise perfect movement and aiming would be beyond human skill.

Advantages: Game AI/heuristic algorithms are used in a wide variety of quite disparate fields inside a game. The most obvious is the control of any NPCs in the game, although scripting is currently the most common means of control. Pathfinding is another common use for AI, widely seen in real-time strategy games. Pathfinding is the method for determining how to get an NPC from one point on a map to another, taking into consideration the terrain, obstacles and possibly "fog of war". Game AI is also involved in dynamic game difficulty balancing, which consists of adjusting the difficulty of a video game in real time based on the player's ability.
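
Pathfinding in particular is usually implemented with a graph-search algorithm such as A*. The sketch below runs A* with a Manhattan-distance heuristic on a tiny hard-coded grid (0 = open, 1 = obstacle); the map, the uniform step cost and the coordinates are invented for illustration.

```python
import heapq

GRID = [                       # 0 = walkable, 1 = obstacle
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 0, 0],
]

def astar(start, goal):
    """A* search on GRID using a Manhattan-distance heuristic."""
    def h(p):
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    frontier = [(h(start), 0, start, [start])]       # (f, g, node, path)
    seen = set()
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < len(GRID) and 0 <= nc < len(GRID[0]) and GRID[nr][nc] == 0:
                heapq.heappush(frontier, (g + 1 + h((nr, nc)), g + 1,
                                          (nr, nc), path + [(nr, nc)]))
    return None                                      # no route around the obstacles

print(astar((0, 0), (0, 4)))     # a route that skirts the wall of 1s
```

Real games layer terrain costs, hierarchical maps and path smoothing on top of this basic search.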

Disadvantages: Cheating AI (also called Rubberband AI) is a term used to describe the situation where the AI has bonuses over the players, such as having more hit-points, driving faster, or ignoring fog of war. The use of cheating in AI shows the limitations of the "intelligence" achievable artificially; generally speaking, in games where strategic creativity is important, humans could easily beat the AI after a minimum of trial and error if it were not for the bonuses. In the context of AI programming, cheating refers only to any privilege given specifically to the AI; this does not include the inhuman swiftness and accuracy natural to a computer, although a player might call that "cheating".

One common example of cheating AI is found in many racing games. If an AI opponent falls far enough behind the rest of the drivers it receives a boost in speed or other attributes, enabling it to catch up and/or again become competitive. This technique is known as "rubber banding" because it allows the AI character to quickly snap back into a competitive position. A similar method is also used in sports games such as the Madden NFL series. In more advanced games, NPC competitiveness may be achieved through dynamic game difficulty balancing, which can be considered fairer though still technically a cheat.
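
A rubber-band mechanism of the kind described above can be sketched in a few lines: when the AI driver falls more than some threshold behind the leader, its speed is quietly boosted. The numbers and names below are illustrative and are not taken from any actual racing game.

```python
# Illustrative rubber-band AI: boost trailing AI drivers so they stay competitive.
BASE_SPEED = 100.0       # arbitrary units
CATCH_UP_GAP = 50.0      # distance behind the leader that triggers the boost
MAX_BOOST = 1.25         # cap so the cheat is not too obvious

def ai_speed(ai_position, leader_position):
    gap = leader_position - ai_position
    if gap <= CATCH_UP_GAP:
        return BASE_SPEED                       # close enough: no cheating
    # Scale the boost with the gap, but never beyond MAX_BOOST.
    boost = min(1.0 + (gap - CATCH_UP_GAP) / 500.0, MAX_BOOST)
    return BASE_SPEED * boost

print(ai_speed(ai_position=900, leader_position=1000))  # trailing by 100 -> boosted
print(ai_speed(ai_position=990, leader_position=1000))  # trailing by 10 -> normal
```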

Argument and Comparison:

The ongoing success of applied Artificial Intelligence and of cognitive simulation seems assured. However, strong AI, which aims to duplicate human intellectual abilities, remains controversial. The reputation of this area of research has been damaged over the years by exaggerated claims of success that have appeared both in the popular media and in the professional journals. At the present time, even an embodied system displaying the overall intelligence of a cockroach is proving elusive, let alone a system rivaling a human being.

The difficulty of "scaling up" AI's so far relatively modest achievements cannot be overstated. Five decades of research in symbolic AI has failed to produce any firm evidence that a symbol-system can manifest human levels of general intelligence. Critics of nouvelle AI regard as mystical the view that high-level behaviours involving language-understanding, planning, and reasoning will somehow "emerge" from the interaction of basic behaviours like obstacle avoidance, gaze control and object manipulation. Connectionists have been unable to construct working models of the nervous systems of even the simplest living things. Caenorhabditis elegans, a much-studied worm, has approximately 300 neurons, whose pattern of interconnections is perfectly known. Yet connectionist models have failed to mimic the worm's simple nervous system. The "neurons" of connectionist theory are gross oversimplifications of the real thing.

However, this lack of substantial progress may simply be testimony to the difficulty of strong AI, not to its impossibility.

Let me turn to the very idea of strong artificial intelligence. Can a computer possibly be intelligent, think and understand? Noam Chomsky suggests that debating this question is pointless, for it is a question of decision, not fact: decision as to whether to adopt a certain extension of common usage. There is, Chomsky claims, no factual question as to whether any such decision is right or wrong--just as there is no question as to whether our decision to say that airplanes fly is right, or our decision not to say that ships swim is wrong. However, Chomsky is oversimplifying matters. Of course we could, if we wished, simply decide to describe bulldozers, for instance, as things that fly. But obviously it would be misleading to do so, since bulldozers are not appropriately similar to the other things that we describe as flying. The important questions are: could it ever be appropriate to say that computers are intelligent, think, and understand, and if so, what conditions must a computer satisfy in order to be so described?

Some authors offer the Turing test as a definition of intelligence: a computer is intelligent if and only if the test fails to distinguish it from a human being. However, Turing himself in fact pointed out that his test cannot provide a definition of intelligence. It is possible, he said, that a computer which ought to be described as intelligent might nevertheless fail the test because it is not capable of successfully imitating a human being. For example, why should an intelligent robot designed to oversee mining on the moon necessarily be able to pass itself off in conversation as a human being? If an intelligent entity can fail the test, then the test cannot function as a definition of intelligence.

It is even questionable whether a computer's passing the test would show that the computer is intelligent. In 1956 Claude Shannon and John McCarthy raised the objection to the test that it is possible in principle to design a program containing a complete set of "canned" responses to all the questions that an interrogator could possibly ask during the fixed time-span of the test. Like Parry, this machine would produce answers to the interviewer's questions by looking up appropriate responses in a giant table. This objection--which has in recent years been revived by Ned Block, Stephen White, and me--seems to show that in principle a system with no intelligence at all could pass the Turing test.
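
The "canned response" objection is easy to picture in code: a lookup table maps whole questions to prepared answers, and nothing resembling understanding takes place. The toy table below is invented; the point of the objection is that, in principle, an astronomically large table of this kind could cover every exchange possible within the test's time limit.

```python
# A lookup-table "conversationalist" in the spirit of the canned-response objection.
CANNED = {
    "how are you today?": "Quite well, thank you. And you?",
    "what is your favourite colour?": "Blue, although I change my mind often.",
    "do you enjoy poetry?": "I prefer a good detective story, to be honest.",
}

def reply(question):
    # No reasoning, no understanding: just retrieval of a prepared answer.
    return CANNED.get(question.strip().lower(), "I'd rather not say.")

print(reply("How are you today?"))
```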

In fact AI has no real definition of intelligence to offer, not even in the sub-human case. Rats are intelligent, but what exactly must a research team achieve in order for it to be the case that the team has created an artifact as intelligent as a rat?

In the absence of a reasonably precise criterion for when an artificial system counts as intelligent, there is no way of telling whether a research program that aims at producing intelligent artifacts has succeeded or failed. One result of AI's failure to produce a satisfactory criterion of when a system counts as intelligent is that whenever AI achieves one of its goals--for example, programs that can summarize newspaper articles, or beat the world chess champion--critics are able to say "That's not intelligence!" (even critics who have previously maintained that no computer could possibly do the thing in question).

Comparing these technologies, we can see that Artificial Intelligence is not yet a complete part of the modern life we are living now. Scientists are developing AI year by year, hoping that AI will become the most powerful technology mankind has ever seen. It is difficult to compare these technologies because each of them currently plays its own special, still incomplete, role in this global world. We encounter robotic machines everywhere, and in every corner we see video games attracting children with their extraordinary features. In hospitals and medical buildings, people can find a number of technologies being used by human beings while helping us to overcome complex tasks that would be risky for a human to perform. There is far more benefit from all these machines than harm, but no one can say that they will lead us to perfection. Our mission is simply to wait and see where it will all lead…

The role of Islamic community in Science:

Among those honored are researchers in Japan, Italy and the Netherlands, a country with a population of just 16 million. Yet the list does not include a single noteworthy breakthrough in any of the world's 56 Muslim nations, encompassing more than 1 billion people.
"Religious fundamentalism is always bad news for science," Pervez Amirali Hoodbhoy, a Pakistani Muslim physicist, recently wrote in an article on Islam and science for Physics Today.
"Scientific progress constantly demands that facts and hypotheses be checked. But there lies the problem: The scientific method is alien to traditional, unreformed religious thought."
While the reasons are many and often controversial, there is no doubt that the Muslim world lags far behind in scientific achievement and research:
* Muslim countries contribute less than 2 percent of the world's scientific literature. Spain alone produces almost as many scientific papers.
* In countries with substantial Muslim populations, the average number of scientists, engineers and technicians per 1,000 people is 8.5. The world average is 40.
* Muslim countries get so few patents that they don't even register on a bar graph comparison with other countries. Of the more than 3 million foreign inventions patented in the United States between 1977 and 2004, only 1,500 were developed in Muslim nations.
* In a survey by the Times of London, just two Muslim universities -- both in cosmopolitan Malaysia -- ranked among the top 200 universities worldwide.
Two Muslim scientists have won Nobel Prizes, but both did their groundbreaking work at Western institutions. Pakistan's Abdus Salam, who won the 1979 physics prize while in Britain, was barred from speaking at any university in his own country.

Today, many of the brightest scientific minds leave their countries to study in Western universities like Virginia Tech and the Massachusetts Institute of Technology, both of which have sizeable Muslim student associations. By some estimates, more than half of the science students from Arab countries never return home to work.