Wednesday, September 2, 2009

Computer Technology


A computer is a machine that manipulates data according to a set of instructions.

Although mechanical examples of computers have existed through much of recorded human history, the first electronic computers were developed in the mid-20th century (1940–1945). These were the size of a large room, consuming as much power as several hundred modern personal computers (PCs). Modern computers based on integrated circuits are millions to billions of times more capable than the early machines, and occupy a fraction of the space.[2] Simple computers are small enough to fit into a wristwatch, and can be powered by a watch battery. Personal computers in their various forms are icons of the Information Age and are what most people think of as "computers". However, the embedded computers found in many devices, from MP3 players to fighter aircraft and from toys to industrial robots, are the most numerous.

The ability to store and execute lists of instructions called programs makes computers extremely versatile, distinguishing them from calculators. The Church–Turing thesis is a mathematical statement of this versatility: any computer with a certain minimum capability is, in principle, capable of performing the same tasks that any other computer can perform. Therefore, computers ranging from a mobile phone to a supercomputer are all able to perform the same computational tasks, given enough time and storage capacity.
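To make this universality concrete, here is a minimal sketch of a Turing machine simulator in Python (an illustration written for this post, not part of any standard library; the rule-table format, state names, and the binary-increment example are all my own hypothetical choices). The point is only that a short stored list of instructions suffices to carry out a computation.

    # Minimal Turing machine simulator (illustrative sketch; the rule-table
    # format and state names are assumptions made for this example).
    def run_turing_machine(tape, rules, state="start", blank="_"):
        """Run a one-tape Turing machine until it enters the 'halt' state."""
        cells = dict(enumerate(tape))  # sparse tape: position -> symbol
        head = 0
        while state != "halt":
            symbol = cells.get(head, blank)
            write, move, state = rules[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells)).strip(blank)

    # Rules: (state, symbol read) -> (symbol to write, head move, next state).
    # This table increments a binary number: walk right to the end of the
    # input, then add 1 while propagating carries back to the left.
    increment = {
        ("start", "0"): ("0", "R", "start"),
        ("start", "1"): ("1", "R", "start"),
        ("start", "_"): ("_", "L", "carry"),
        ("carry", "1"): ("0", "L", "carry"),
        ("carry", "0"): ("1", "L", "done"),
        ("carry", "_"): ("1", "L", "done"),
        ("done", "0"): ("0", "L", "done"),
        ("done", "1"): ("1", "L", "done"),
        ("done", "_"): ("_", "R", "halt"),
    }

    print(run_turing_machine("1011", increment))  # prints 1100 (11 + 1 = 12)

Swapping in a different rule table makes the same simulator perform a different task, which is exactly the versatility the thesis describes: the machine's behaviour lives in its stored instructions, not in its wiring.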

History of Computing


The Jacquard loom, on display at the Museum of Science and Industry in Manchester, England, was one of the first programmable devices.

The first use of the word "computer" was recorded in 1613, referring to a person who carried out calculations, or computations, and the word continued to be used in that sense until the middle of the 20th century. From the end of the 19th century onwards though, the word began to take on its more familiar meaning, describing a machine that carries out computations.

The history of the modern computer begins with two separate technologies—automated calculation and programmability—but no single device can be identified as the earliest computer, partly because of the inconsistent application of that term. Examples of early mechanical calculating devices include the abacus, the slide rule and arguably the astrolabe and the Antikythera mechanism (which dates from about 150–100 BC). Hero of Alexandria (c. 10–70 AD) built a mechanical theater which performed a play lasting 10 minutes, operated by a complex system of ropes and drums that might be considered a means of deciding which parts of the mechanism performed which actions and when.

This is the essence of programmability.

The "castle clock", an astronomical clock invented by Al-Jazari in 1206, is considered to be the earliestprogrammable analog computer. It displayed the zodiac, the solar and lunar orbits, a crescent moon-shapedpointer travelling across a gateway causing automatic doors to open every hour, and five robotic musicians who played music when struck by levers operated by a camshaft attached to a water wheel. The length of day andnight could be re-programmed to compensate for the changing lengths of day and night throughout the year.

The Renaissance saw a re-invigoration of European mathematics and engineering. Wilhelm Schickard's 1623 device was the first of a number of mechanical calculators constructed by European engineers, but none fit the modern definition of a computer, because they could not be programmed.

In 1801, Joseph Marie Jacquard made an improvement to the textile loom by introducing a series of punched paper cards as a template which allowed his loom to weave intricate patterns automatically. The resulting Jacquard loom was an important step in the development of computers because the use of punched cards to define woven patterns can be viewed as an early, albeit limited, form of programmability.

It was the fusion of automatic calculation with programmability that produced the first recognizable computers. In 1837, Charles Babbage was the first to conceptualize and design a fully programmable mechanical computer, his Analytical Engine. Limited finances and Babbage's inability to resist tinkering with the design meant that the device was never completed.

In the late 1880s, Herman Hollerith invented the recording of data on a machine-readable medium. Earlier uses of machine-readable media had been for control, not data. "After some initial trials with paper tape, he settled on punched cards ..." To process these punched cards he invented the tabulator and the keypunch machine. These three inventions were the foundation of the modern information-processing industry. Large-scale automated data processing of punched cards was performed for the 1890 United States Census by Hollerith's company, which later became the core of IBM. By the end of the 19th century a number of technologies that would later prove useful in the realization of practical computers had begun to appear: the punched card, Boolean algebra, the vacuum tube (thermionic valve) and the teleprinter.

During the first half of the 20th century, many scientific computing needs were met by increasingly sophisticated analog computers, which used a direct mechanical or electrical model of the problem as a basis for computation. However, these were not programmable and generally lacked the versatility and accuracy of modern digital computers.

Alan Turing is widely regarded as the father of modern computer science. In 1936 Turing provided an influential formalisation of the concepts of algorithm and computation with the Turing machine. Of his role in the modern computer, Time magazine, in naming Turing one of the 100 most influential people of the 20th century, states: "The fact remains that everyone who taps at a keyboard, opening a spreadsheet or a word-processing program, is working on an incarnation of a Turing machine."

In the 1970s, computer engineers at research institutions throughout the United States began to link their computers together using telecommunications technology. This effort was funded by ARPA (now DARPA), and the computer network that it produced was called the ARPANET. The technologies that made the ARPANET possible spread and evolved.

In time, the network spread beyond academic and military institutions and became known as the Internet. The emergence of networking involved a redefinition of the nature and boundaries of the computer. Computer operating systems and applications were modified to include the ability to define and access the resources of other computers on the network, such as peripheral devices, stored information, and the like, as extensions of the resources of an individual computer. Initially these facilities were available primarily to people working in high-tech environments, but in the 1990s the spread of applications like e-mail and the World Wide Web, combined with the development of cheap, fast networking technologies like Ethernet and ADSL, saw computer networking become almost ubiquitous. In fact, the number of computers that are networked is growing phenomenally. A very large proportion of personal computers regularly connect to the Internet to communicate and receive information. "Wireless" networking, often utilizing mobile phone networks, has meant networking is becoming increasingly ubiquitous even in mobile computing environments.

Types of computers:

1. Supercomputer and Mainframe

Supercomputer is a broad term for one of the fastest computers currently available. Supercomputers are very expensive and are employed for specialized applications that require immense amounts of mathematical calculations (number crunching). For example, weather forecasting requires a supercomputer. Other uses of supercomputers include scientific simulations, (animated) graphics, fluid dynamics calculations, nuclear energy research, electronic design, and analysis of geological data (e.g. in petrochemical prospecting). Perhaps the best-known supercomputer manufacturer is Cray Research.

Mainframe was a term originally referring to the cabinet containing the central processor unit or "main frame" of a room-filling Stone Age batch machine. After the emergence of smaller "minicomputer" designs in the early 1970s, the traditional big iron machines were described as "mainframe computers" and eventually just as mainframes. Nowadays a mainframe is a very large and expensive computer capable of supporting hundreds, or even thousands, of users simultaneously. The chief difference between a supercomputer and a mainframe is that a supercomputer channels all its power into executing a few programs as fast as possible, whereas a mainframe uses its power to execute many programs concurrently. In some ways, mainframes are more powerful than supercomputers because they support more simultaneous programs. But supercomputers can execute a single program faster than a mainframe. The distinction between small mainframes and minicomputers is vague, depending really on how the manufacturer wants to market its machines.

2. Minicomputer

It is a midsize computer. In the past decade, the distinction between large minicomputers and small mainframes has blurred, however, as has the distinction between small minicomputers and workstations. But in general, a minicomputer is a multiprocessing system capable of supporting up to 200 users simultaneously.

3. Workstation

It is a type of computer used for engineering applications (CAD/CAM), desktop publishing, software development, and other types of applications that require a moderate amount of computing power and relatively high-quality graphics capabilities. Workstations generally come with a large, high-resolution graphics screen, a large amount of RAM, built-in network support, and a graphical user interface. Most workstations also have a mass storage device such as a disk drive, but a special type of workstation, called a diskless workstation, comes without a disk drive. The most common operating systems for workstations are UNIX and Windows NT. Like personal computers, most workstations are single-user computers. However, workstations are typically linked together to form a local-area network, although they can also be used as stand-alone systems.

N.B.: In networking, workstation refers to any computer connected to a local-area network. It could be a workstation or a personal computer.

4. Personal computer

It can be defined as a small, relatively inexpensive computer designed for an individual user. In price, personal computers range anywhere from a few hundred pounds to over five thousand pounds. All are based on the microprocessor technology that enables manufacturers to put an entire CPU on one chip. Businesses use personal computers for word processing, accounting, desktop publishing, and for running spreadsheet and database management applications. At home, the most popular use for personal computers is for playing games and recently for surfing the Internet.

Personal computers first appeared in the late 1970s. One of the first and most popular personal computers was the Apple II, introduced in 1977 by Apple Computer. During the late 1970s and early 1980s, new models and competing operating systems seemed to appear daily. Then, in 1981, IBM entered the fray with its first personal computer, known as the IBM PC. (PC is short for personal computer or IBM PC.) The IBM PC quickly became the personal computer of choice, and most other personal computer manufacturers fell by the wayside. One of the few companies to survive IBM's onslaught was Apple Computer, which remains a major player in the personal computer marketplace. Other companies adjusted to IBM's dominance by building IBM clones, computers that were internally almost the same as the IBM PC but that cost less. Because IBM clones used the same microprocessors as IBM PCs, they were capable of running the same software. Over the years, IBM has lost much of its influence in directing the evolution of PCs. Nevertheless, after the release of the first PC by IBM, the term PC increasingly came to mean IBM or IBM-compatible personal computers, to the exclusion of other types of personal computers, such as Macintoshes. In recent years, the term PC has become more and more difficult to pin down. In general, though, it applies to any personal computer based on an Intel microprocessor, or on an Intel-compatible microprocessor. For nearly every other component, including the operating system, there are several options, all of which fall under the rubric of PC.

Today, the world of personal computers is basically divided between Apple Macintoshes and PCs. The principal characteristics of personal computers are that they are single-user systems and are based on microprocessors. However, although personal computers are designed as single-user systems, it is common to link them together to form a network. In terms of power, there is great variety. At the high end, the distinction between personal computers and workstations has faded. High-end models of the Macintosh and PC offer the same computing power and graphics capability as low-end workstations by Sun Microsystems, Hewlett-Packard, and DEC.

Personal Computer Types

Personal computers can generally be classified by size and chassis/case. The chassis or case is the metal frame that serves as the structural support for electronic components. Every computer system requires at least one chassis to house the circuit boards and wiring. The chassis also contains slots for expansion boards. If you want to insert more boards than there are slots, you will need an expansion chassis, which provides additional slots. There are two basic flavors of chassis design, desktop models and tower models, but there are many variations on these two basic types. Then come portable computers, which are small enough to carry. Portable computers include notebook and subnotebook computers, hand-held computers, palmtops, and PDAs.

5. Tower model

The term refers to a computer in which the power supply, motherboard, and mass storage devices are stacked on top of each other in a cabinet. This is in contrast to desktop models, in which these components are housed in a more compact box. The main advantage of tower models is that there are fewer space constraints, which makes installation of additional storage devices easier.

6. Desktop model

A computer designed to fit comfortably on top of a desk, typically with the monitor sitting on top of the computer. Desktop model computers are broad and low, whereas tower model computers are narrow and tall. Because of their shape, desktop model computers are generally limited to three internal mass storage devices. Desktop models designed to be very small are sometimes referred to as slimline models.

7. Notebook computer

An extremely lightweight personal computer. Notebook computers typically weigh less than 6 pounds and are small enough to fit easily in a briefcase. Aside from size, the principal difference between a notebook computer and a personal computer is the display screen. Notebook computers use a variety of techniques, known as flat-panel technologies, to produce a lightweight and non-bulky display screen. The quality of notebook display screens varies considerably. In terms of computing power, modern notebook computers are nearly equivalent to personal computers. They have the same CPUs, memory capacity, and disk drives. However, all this power in a small package is expensive. Notebook computers cost about twice as much as equivalent regular-sized computers. Notebook computers come with battery packs that enable you to run them without plugging them in. However, the batteries need to be recharged every few hours.

8. Laptop computer

A small, portable computer, small enough that it can sit on your lap. Nowadays, laptop computers are more frequently called notebook computers.

9. Subnotebook computer

A portable computer that is slightly lighter and smaller than a full-sized notebook computer. Typically, subnotebook computers have a smaller keyboard and screen, but are otherwise equivalent to notebook computers.

10. Hand-held computer

A portable computer that is small enough to be held in one’s hand. Although extremely convenient to carry, handheld computers have not replaced notebook computers because of their small keyboards and screens. The most popular hand-held computers are those that are specifically designed to provide PIM (personal information manager) functions, such as a calendar and address book. Some manufacturers are trying to solve the small keyboard problem by replacing the keyboard with an electronic pen. However, these pen-based devices rely on handwriting recognition technologies, which are still in their infancy. Hand-held computers are also called PDAs, palmtops and pocket computers.

11. Palmtop

A small computer that literally fits in your palm. Compared to full-size computers, palmtops are severely limited, but they are practical for certain functions such as phone books and calendars. Palmtops that use a pen rather than a keyboard for input are often called hand-held computers or PDAs. Because of their small size, most palmtop computers do not include disk drives. However, many contain PCMCIA slots in which you can insert disk drives, modems, memory, and other devices. Palmtops are also called PDAs, hand-held computers and pocket computers.

12. PDA

Short for personal digital assistant, a handheld device that combines computing, telephone/fax, and networking features. A typical PDA can function as a cellular phone, fax sender, and personal organizer. Unlike portable computers, most PDAs are pen-based, using a stylus rather than a keyboard for input. This means that they also incorporate handwriting recognition features. Some PDAs can also react to voice input by using voice recognition technologies. The field of PDAs was pioneered by Apple Computer, which introduced the Newton MessagePad in 1993. Shortly thereafter, several other manufacturers offered similar products. To date, PDAs have had only modest success in the marketplace, due to their high price tags and limited applications. However, many experts believe that PDAs will eventually become common gadgets.

Supercomputer and Mainframe

Supercomputer:

· A supercomputer handles large amounts of scientific data.

· A supercomputer generates large amounts of heat and must be cooled. Cooling most supercomputers is a major HVAC problem.

· Information cannot move faster than the speed of light between two parts of a supercomputer. For this reason, a supercomputer that is many metres across must have latencies between its components measured at least in the tens of nanoseconds (a worked calculation follows this list).

· Supercomputers consume and produce massive amounts of data in a very short period of time.

· "A supercomputer is a device for turning compute-bound problems into I/O-bound problems." Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.
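As a back-of-the-envelope check of the speed-of-light point above, the following Python sketch computes the minimum one-way signal latency across a machine of a given width. This is illustrative arithmetic only; the machine widths are assumed figures, and real signals in copper or fibre travel slower than light in vacuum, so these numbers are optimistic lower bounds.

    # Lower bound on signal latency across a large machine (illustrative
    # arithmetic; the widths below are assumptions, not real machine specs).
    SPEED_OF_LIGHT_M_PER_S = 299_792_458  # light in vacuum: the hard limit

    def min_one_way_latency_ns(distance_m):
        """Minimum time for a signal to cross 'distance_m' metres, in ns."""
        return distance_m / SPEED_OF_LIGHT_M_PER_S * 1e9

    for width in (1, 10, 30):  # metres across the machine
        print(f"{width:>2} m across -> at least {min_one_way_latency_ns(width):.1f} ns")
    # 30 m across -> at least 100.1 ns: tens of nanoseconds, as stated above.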

Mainframe:

· A mainframe is used to handle the data processing of big organizations.

· 90% of IBM's mainframes have CICS transaction processing software installed.[8] Other software staples include the IMS and DB2 databases, and WebSphere MQ and WebSphere Application Server middleware.

· As of 2004, IBM claimed over 200 new (21st-century) mainframe customers: customers that had never previously owned a mainframe.

· Most mainframes run continuously at over 70% CPU utilization. A 90% figure is typical, and modern mainframes tolerate sustained periods of 100% CPU utilization, queuing work according to business priorities without disrupting ongoing execution.

· Mainframes have a historical reputation for being "expensive," but the modern reality is much different. As of late 2006, it is possible to buy and configure a complete IBM mainframe system (with software, storage, and support), under standard commercial use terms, for about $50,000 (U.S.). The price of z/OS starts at about $1,500 (U.S.) per year, including 24x7 telephone and Web support.[9]

· In the unlikely event a mainframe needs repair, it is typically repaired without interruption to running applications. Memory, storage, and processor modules can also be added or hot swapped without interrupting applications. It is not unusual for a mainframe to be continuously switched on for months or years at a stretch.

Comparison between Supercomputers and Mainframes

1. Both types of systems offer parallel processing, although this has not always been the case. Parallel processing (i.e., multiple CPUs executing instructions simultaneously; see the sketch after this list) was used in supercomputers (e.g., the Cray-1) for decades before this feature appeared in mainframes, primarily due to cost at that time. Supercomputers typically expose parallel processing to the programmer in complex ways, while mainframes typically use it to run multiple tasks. One result of this difference is that adding processors to a mainframe often speeds up the entire workload transparently.

2. Supercomputers are optimized for complex computations that take place largely in memory, while mainframes are optimized for comparatively simple computations involving huge amounts of external data. For example, weather forecasting is suited to supercomputers, and insurance business or payroll processing applications are more suited to mainframes.

3. Supercomputers are often purpose-built for one or a very few specific institutional tasks (e.g. simulation and modeling). Mainframes typically handle a wider variety of tasks (e.g. data processing, warehousing). Consequently, most supercomputers can be one-off designs, whereas mainframes typically form part of a manufacturer's standard model lineup.

4. Mainframes tend to have numerous ancillary service processors assisting their main central processors (for cryptographic support, I/O handling, monitoring, memory handling, etc.) so that the actual "processor count" is much higher than would otherwise be obvious. Supercomputer design tends not to include as many service processors since they don't appreciably add to raw number-crunching power. This distinction is perhaps blurring over time as Moore's Law constraints encourage more specialization in server components.

5. Mainframes are exceptionally adept at batch processing, such as billing, owing to their heritage, decades of increasing customer expectations for batch improvements, and throughput-centric design. Supercomputers generally perform quite poorly in batch processing.
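The following Python sketch illustrates the idea behind point 1 above: when a batch of independent tasks is spread across more worker processes, the whole workload finishes sooner without any change to the individual tasks. This is a toy illustration, not drawn from any real mainframe or supercomputer workload; the task sizes and worker counts are arbitrary assumptions.

    # Toy illustration of parallel processing: sixteen independent,
    # equally sized CPU-bound jobs spread over 1, 2, then 4 worker
    # processes. On a multicore machine, more workers shorten the
    # wall-clock time for the whole batch.
    import time
    from multiprocessing import Pool

    def simulate_task(n):
        """Stand-in for one unit of independent work (e.g. one record)."""
        return sum(i * i for i in range(n))

    if __name__ == "__main__":
        tasks = [2_000_000] * 16  # sixteen identical jobs (size is arbitrary)
        for workers in (1, 2, 4):
            start = time.perf_counter()
            with Pool(processes=workers) as pool:
                pool.map(simulate_task, tasks)
            elapsed = time.perf_counter() - start
            print(f"{workers} worker process(es): {elapsed:.2f} s")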

2. Ethical Value

What is Computer Ethics?

* This article first appeared in Terrell Ward Bynum, ed., Computers & Ethics, Blackwell, 1985, pp. 266–75. (A special issue of the journal Metaphilosophy.)

by James H. Moor

A Proposed Definition

Computers are special technology and they raise some special ethical issues. In this essay I will discuss what makes computers different from other technology and how this difference makes a difference in ethical considerations. In particular, I want to characterize computer ethics and show why this emerging field is both intellectually interesting and enormously important.

On my view, computer ethics is the analysis of the nature and social impact of computer technology and the corresponding formulation and justification of policies for the ethical use of such technology. I use the phrase “computer technology” because I take the subject matter of the field broadly to include computers and associated technology. For instance, I include concerns about software as well as hardware and concerns about networks connecting computers as well as computers themselves.

A typical problem in computer ethics arises because there is a policy vacuum about how computer technology should be used. Computers provide us with new capabilities and these in turn give us new choices for action. Often, either no policies for conduct in these situations exist or existing policies seem inadequate. A central task of computer ethics is to determine what we should do in such cases, i.e., to formulate policies to guide our actions. Of course, some ethical situations confront us as individuals and some as a society. Computer ethics includes consideration of both personal and social policies for the ethical use of computer technology.

Now it may seem that all that needs to be done is the mechanical application of an ethical theory to generate the appropriate policy. But this is usually not possible. A difficulty is that along with a policy vacuum there is often a conceptual vacuum. Although a problem in computer ethics may seem clear initially, a little reflection reveals a conceptual muddle. What is needed in such cases is an analysis which provides a coherent conceptual framework within which to formulate a policy for action. Indeed, much of the important work in computer ethics is devoted to proposing conceptual frameworks for understanding ethical problems involving computer technology.

An example may help to clarify the kind of conceptual work that is required. Let’s suppose we are trying to formulate a policy for protecting computer programs. Initially, the idea may seem clear enough. We are looking for a policy for protecting a kind of intellectual property. But then a number of questions which do not have obvious answers emerge. What is a computer program? Is it really intellectual property which can be owned or is it more like an idea, an algorithm, which is not owned by anybody? If a computer program is intellectual property, is it an expression of an idea that is owned (traditionally protectable by copyright) or is it a process that is owned (traditionally protectable by patent)? Is a machine-readable program a copy of a human-readable program? Clearly, we need a conceptualization of the nature of a computer program in order to answer these kinds of questions. Moreover, these questions must be answered in order to formulate a useful policy for protecting computer programs. Notice that the conceptualization we pick will not only affect how a policy will be applied but to a certain extent what the facts are. For instance, in this case the conceptualization will determine when programs count as instances of the same program.

Even within a coherent conceptual framework, the formulation of a policy for using computer technology can be difficult. As we consider different policies we discover something about what we value and what we don’t. Because computer technology provides us with new possibilities for acting, new values emerge. For example, creating software has value in our culture which it didn’t have a few decades ago. And old values have to be reconsidered. For instance, assuming software is intellectual property, why should intellectual property be protected? In general, the consideration of alternative policies forces us to discover and make explicit what our value preferences are.

The mark of a basic problem in computer ethics is one in which computer technology is essentially involved and there is an uncertainty about what to do and even about how to understand the situation. Hence, not all ethical situations involving computers are central to computer ethics. If a burglar steals available office equipment including computers, then the burglar has done something legally and ethically wrong. But this is really an issue for general law and ethics. Computers are only accidentally involved in this situation, and there is no policy or conceptual vacuum to fill. The situation and the applicable policy are clear.

In one sense I am arguing for the special status of computer ethics as a field of study. Applied ethics is not simply ethics applied. But, I also wish to stress the underlying importance of general ethics and science to computer ethics. Ethical theory provides categories and procedures for determining what is ethically relevant. For example, what kinds of things are good? What are our basic rights? What is an impartial point of view? These considerations are essential in comparing and justifying policies for ethical conduct. Similarly, scientific information is crucial in ethical evaluations. It is amazing how many times ethical disputes turn not on disagreements about values but on disagreements about facts.

On my view, computer ethics is a dynamic and complex field of study which considers the relationships among facts, conceptualizations, policies and values with regard to constantly changing computer technology. Computer ethics is not a fixed set of rules which one shellacs and hangs on the wall. Nor is computer ethics the rote application of ethical principles to a value-free technology. Computer ethics requires us to think anew about the nature of computer technology and our values. Although computer ethics is a field between science and ethics and depends on them, it is also a discipline in its own right which provides both conceptualizations for understanding and policies for using computer technology.

Though I have indicated some of the intellectually interesting features of computer ethics, I have not said much about the problems of the field or about its practical importance. The only example I have used so far is the issue of protecting computer programs which may seem to be a very narrow concern. In fact, I believe the domain of computer ethics is quite large and extends to issues which affect all of us. Now I want to turn to a consideration of these issues and argue for the practical importance of computer ethics. I will proceed not by giving a list of problems but rather by analyzing the conditions and forces which generate ethical issues about computer technology. In particular, I want to analyze what is special about computers, what social impact computers will have, and what is operationally suspect about computing technology. I hope to show something of the nature of computer ethics by doing some computer ethics.

By James H. Moor.
