From 1954 to 1958 American computer scientist John Backus of International Business Machines, Inc. (IBM) led the team that developed Fortran, an acronym for Formula Translation. It became a standard programming language because it allowed programmers to express mathematical formulas directly. Fortran and its variants are still in use today, especially in physics and other scientific computing.
An operating system acts as the interface between the user and the computer hardware.
Operating System (OS), in computer science, the basic software that controls a computer. The operating system has three major functions: It coordinates and manipulates computer hardware, such as computer memory, printers, disks, keyboard, mouse, and monitor;
it organizes files on a variety of storage media, such as floppy disk, hard drive, compact disc, digital video disc, and tape; and it manages hardware errors and the loss of data.
When a computer is turned on it searches for instructions in its memory. These instructions tell the computer how to start up. Usually, one of the first sets of these instructions is a special program called the operating system, which is the software that makes the computer work. It prompts the user (or other machines) for input and commands, reports the results of these commands and other operations, stores and manages data, and controls the sequence of software and hardware actions. When the user requests that a program run, the operating system loads the program into the computer's memory and runs it. Popular operating systems, such as Microsoft Windows and the Macintosh system (Mac OS), have graphical user interfaces (GUIs) that use small pictures, or icons, to represent files and commands. To access these files or commands, the user clicks on an icon with the mouse or presses a combination of keys on the keyboard. Some operating systems allow the user to carry out these tasks via voice, touch, or other input methods.
Software, on the other hand, is the set of instructions a computer uses to manipulate data, such as a word-processing program or a video game. These programs are usually stored and transferred via the computer's hardware to and from the CPU. Software also governs how the hardware is utilized; for example, how information is retrieved from a storage device. The interaction between the input and output hardware is controlled by software called the Basic Input/Output System (BIOS).
SCSI, acronym for small computer system interface, a standard high-speed parallel interface defined by the X3T9.2 committee of the American National Standards Institute (ANSI). A SCSI interface is used for connecting microcomputers to peripheral devices, such as hard disks and printers, and to other computers and local area networks.
Up to seven devices, not including the computer, can be attached through a single SCSI connection (port) through sequential connections called a daisy chain. Each device has an address (priority number). Only one device at a time can transmit through the port; priority is given to the device with the highest address. A SCSI port is standard on the Apple Macintosh Plus, Macintosh SE, Macintosh II, the IBM RS/6000, and the IBM PS/2 model 65 and higher computers. It can be installed in IBM PC and compatible computers as an expansion board.
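The arbitration rule described above can be sketched in a few lines of code. The following C++ fragment is illustrative only; the device names and addresses are hypothetical, and real SCSI arbitration is carried out in hardware on the bus.

```cpp
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Each SCSI device on the daisy chain has an address (priority number) from 0 to 7.
struct ScsiDevice {
    std::string name;  // hypothetical device name
    int address;       // 0-7; higher address means higher priority
};

// Of all devices currently requesting the bus, the one with the
// highest address is granted access, as described in the text.
const ScsiDevice* arbitrate(const std::vector<ScsiDevice>& requesting) {
    if (requesting.empty()) return nullptr;
    return &*std::max_element(
        requesting.begin(), requesting.end(),
        [](const ScsiDevice& a, const ScsiDevice& b) { return a.address < b.address; });
}

int main() {
    std::vector<ScsiDevice> requesting = {
        {"hard disk", 0}, {"scanner", 4}, {"tape drive", 2}};
    const ScsiDevice* winner = arbitrate(requesting);
    std::cout << winner->name << " (address " << winner->address
              << ") wins arbitration\n";  // prints: scanner (address 4) wins arbitration
}
```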
Graphics Card, also called a video adapter, translates computer software instructions into images displayed on the monitor.
The graphics card is a printed circuit board that plugs into a slot in the main circuit board (motherboard) of the computer and determines the horizontal scan rate (how fast the monitor's electron beam sweeps across each line of the screen) and how many colors can be displayed; it must be compatible with the monitor.
Expansion Slot, in computer science, a socket inside a computer console, designed to hold expansion boards and connect them to the system bus (data pathway). The number of sockets, or slots, determines the amount of expansion allowed. Most personal computers have from three to eight expansion slots. Expansion slots provide a means of adding new or enhanced features or more memory to the system.
Sound Card, printed circuit board, or card, that can translate digital information into sound and back; also called a sound board or sound adapter. Sound cards plug into a slot on the motherboard (the main circuit board of a computer) and are usually connected to a pair of speakers. To play sounds, the sound card receives digital information from a stored file and turns it into an electrical signal it sends to the speakers, which produce the sound.
If the sound card is attached to a microphone, the sound card can take the incoming sound and convert it into digital information by sampling, or taking tiny sections of, the sound many times each second (the most sophisticated sound cards can take almost 200,000 samples per second, but most take around 50,000 to 100,000 samples per second). Each sample is given a number that represents the loudness and tone of the sample and the order in which it occurs in the entire sound.
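The sampling process can be illustrated with a short program. The sketch below digitizes a pure tone in the way described, measuring the waveform at regular intervals and storing each measurement as a 16-bit number; the 44,100-samples-per-second rate and the 440 Hz tone are common illustrative values, not figures from the text.

```cpp
#include <cmath>
#include <cstdint>
#include <iostream>
#include <vector>

int main() {
    const double pi = 3.141592653589793;
    const int sampleRate = 44100;         // measurements per second (illustrative)
    const double frequency = 440.0;       // pitch of the tone, in hertz
    const double durationSeconds = 0.01;  // digitize 10 milliseconds of sound

    std::vector<std::int16_t> samples;
    const int count = static_cast<int>(sampleRate * durationSeconds);
    for (int n = 0; n < count; ++n) {
        double t = static_cast<double>(n) / sampleRate;        // time of this measurement
        double amplitude = std::sin(2.0 * pi * frequency * t); // waveform value in [-1, 1]
        // Quantize: store the measurement as a 16-bit whole number.
        samples.push_back(static_cast<std::int16_t>(amplitude * 32767.0));
    }
    std::cout << "stored " << samples.size() << " samples\n";  // 441 samples
}
```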
Expansion Board, in computer science, circuit board holding chips and other electronic components connected by conductive paths that is plugged into a computer's bus (main data-transfer path) to add functions or resources to the computer. Typical expansion boards add memory, disk-drive controllers, video support, parallel and serial ports, and internal modems. The simple terms board and card are used interchangeably by most people to refer to all expansion boards.
MS-DOS, acronym for Microsoft Disk Operating System. In computer science, MS-DOS—like other operating systems—oversees such operations as disk input and output, video support, keyboard control, and many internal functions related to program execution and file maintenance. MS-DOS is a single-tasking, single-user operating system with a command-line interface.
Operating systems continue to evolve. A recently developed type of OS called a distributed operating system is designed for a connected, but independent, collection of computers that share resources such as hard drives. In a distributed OS, a process can run on any computer in the network (presumably a computer that is idle) to increase that process's performance. All basic OS functions—such as maintaining file systems, ensuring reasonable behavior, and recovering data in the event of a partial failure—become more complex in distributed systems.
Research is also being conducted that would replace the keyboard with a means of using voice or handwriting for input. Currently these types of input are imprecise because people pronounce and write words very differently, making it difficult for a computer to recognize the same input from different users. However, advances in this field have led to systems that can recognize a small number of words spoken by a variety of people. In addition, software has been developed that can be taught to recognize an individual's handwriting.
Operating systems commonly found on personal computers include UNIX, Macintosh OS, and Windows. UNIX, developed in 1969 at AT&T Bell Laboratories, is a popular operating system among academic computer users. Its popularity is due in large part to the growth of the interconnected computer network known as the Internet. Software for the Internet was initially designed for computers that ran UNIX. Variations of UNIX include SunOS (distributed by Sun Microsystems, Inc.), Xenix (distributed by Microsoft Corporation), and Linux (available for download free of charge and distributed commercially by companies such as Red Hat, Inc.). UNIX and its clones support multitasking and multiple users. Its file system provides a simple means of organizing disk files and lets users control access to their files. UNIX commands are terse and not readily apparent, however, and mastering the system is difficult. Consequently, although UNIX is popular with professionals, it is not the operating system of choice for the general public.
Instead, windowing systems with graphical interfaces, such as Windows and the Macintosh OS, which make computer technology more accessible, are widely used in personal computers (PCs). However, graphical systems generally have the disadvantage of requiring more hardware—such as faster CPUs, more memory, and higher-quality monitors—than do command-oriented operating systems.
OS/2, or Operating System 2, operating system developed for the personal computer in the mid-1980s by International Business Machines Corporation (IBM) and Microsoft Corporation. An operating system is the set of software programs that controls the basic functions of a computer. The operating system coordinates and stores data entering and leaving the computer, controls the computer’s hardware (such as computer memory, keyboard, and mouse), and handles system errors.
At the time OS/2 was introduced in late 1987, the most common personal computers were IBM-compatible computers running the Microsoft Disk Operating System (MS-DOS) and computers manufactured by Apple Computer, Inc., running Apple's system for the Macintosh (Mac OS). The Macintosh operating system included multitasking, a feature that enabled computers to run several applications simultaneously. In a computer network, multitasking allows several users on different computers to have simultaneous access to the same application or data set. OS/2 was the first operating system designed for IBM-compatible personal computers that allowed multitasking.
The first version of OS/2, version 1.0, was text-oriented and lacked a graphical user interface (GUI) that would allow users to enter commands with a point-and-click input device, such as a computer mouse. A year later IBM and Microsoft released OS/2 version 1.1, which included a GUI called the Presentation Manager. The Presentation Manager interface contained icons, pictures or words on the screen that users could click on with a mouse to enter instructions. OS/2 version 1.1 also allowed users to have multiple windows open (windows are portions of the screen that each contain a different document or program) and included pull-down lists of commands that the user could choose by clicking on them with their mouse.
IBM and Microsoft ended their collaboration on OS/2 in 1991 after Microsoft released its Windows software, a multitasking environment that ran on MS-DOS. In 1992 IBM released version 2.0 of OS/2, which ran Microsoft Windows programs and could perform multitasking of DOS operations. It also contained an object-oriented programming environment that allowed software designers to create programs using high-level, object-oriented programming languages.
Subsequent versions of OS/2 offered enhanced performance and multimedia capabilities, and in 1994 IBM announced that more than 5 million copies of OS/2 had been sold since its introduction. The same year, IBM introduced a new version of OS/2 called OS/2 Warp that featured improved performance, more multimedia capabilities, an array of integrated applications, and easy access to the Internet. IBM has continued to upgrade and extend OS/2 Warp.
UNIX, in computer science, a powerful multiuser, multitasking operating system. Written in the C language, UNIX is highly portable and can be installed on virtually any computer.
UNIX was originally developed by Ken Thompson and Dennis Ritchie at AT&T Bell Laboratories in 1969 for use on minicomputers. In the early 1970s, many universities, research institutions, and companies began to expand on and improve UNIX. These efforts resulted in two main versions: BSD UNIX, a version developed at the University of California at Berkeley, and System V, developed by AT&T and its collaborators.
Many companies developed and marketed their own versions of UNIX in subsequent years. Variations of UNIX include AIX, a version of UNIX adapted by IBM to run on RISC-based workstations; A/UX, a graphical version for the Apple Macintosh; XENIX OS, developed by Microsoft Corporation for 16-bit microprocessors; SunOS, adapted and distributed by Sun Microsystems, Inc.; Mach, a UNIX-compatible operating system for the NeXT computer; and Linux, developed by Finnish computer engineer Linus Torvalds with collaborators worldwide.
Windows, in computer science, personal computer operating system sold by Microsoft Corporation that allows users to enter commands with a point-and-click device, such as a mouse, instead of a keyboard. An operating system is a set of programs that control the basic functions of a computer. The Windows operating system provides users with a graphical user interface (GUI), which allows them to manipulate small pictures, called icons, on the computer screen to issue commands. Windows is the most widely used operating system in the world. It is an extension of and replacement for Microsoft’s Disk Operating System (MS-DOS).
The Windows GUI is designed to be a natural, or intuitive, work environment for the user. With Windows, the user can move a cursor around on the computer screen with a mouse. By pointing the cursor at icons and clicking buttons on the mouse, the user can issue commands to the computer to perform an action, such as starting a program, accessing a data file, or copying a data file. Other commands can be reached through pull-down or click-on menu items. The computer displays the active area in which the user is working as a window on the computer screen. The currently active window may overlap with other previously active windows that remain open on the screen. This type of GUI is said to include WIMP features: windows, icons, menus, and pointing device (such as a mouse).
Computer scientists at the Xerox Corporation’s Palo Alto Research Center (PARC) invented the GUI concept in the early 1970s, but this innovation was not an immediate commercial success. In 1983 Apple Computer featured a GUI in its Lisa computer. This GUI was updated and improved in its Macintosh computer, introduced in 1984.
Microsoft began its development of a GUI in 1983 as an extension of its MS-DOS operating system. Microsoft’s Windows version 1.0 first appeared in 1985. In this version, the windows were tiled, or presented next to each other rather than overlapping. Windows version 2.0, introduced in 1987, was designed to resemble IBM’s OS/2 Presentation Manager, another GUI operating system. Windows version 2.0 included the overlapping window feature. The more powerful version 3.0 of Windows, introduced in 1990, and subsequent versions 3.1 and 3.11 rapidly made Windows the market leader in operating systems for personal computers, in part because it was prepackaged on new personal computers. It also became the favored platform for software development.
In 1993 Microsoft introduced Windows NT (New Technology). The Windows NT operating system offers 32-bit multitasking, which gives a computer the ability to run several programs simultaneously, or in parallel, at high speed. This operating system competes with IBM’s OS/2 as a platform for the intensive, high-end, networked computing environments found in many businesses.
In 1995 Microsoft released a new version of Windows for personal computers called Windows 95. Windows 95 had a sleeker and simpler GUI than previous versions. It also offered 32-bit processing, efficient multitasking, network connections, and Internet access. Windows 98, released in 1998, improved upon Windows 95.
In 1996 Microsoft debuted Windows CE, a scaled-down version of the Microsoft Windows platform designed for use with handheld personal computers. Windows 2000, released at the end of 1999, combined Windows NT technology with the Windows 98 graphical user interface. In 2000 a special edition of Windows known as Windows Millennium Edition, or Windows ME, provided a more stable version of the Windows 98 interface. In 2001 Microsoft released a new operating system known as Windows XP, the company's first operating system for consumers that was not based on MS-DOS.
Other popular operating systems include the Macintosh System (Mac OS) from Apple Inc., OS/2 Warp from IBM (see OS/2), and UNIX and its variations, such as Linux.
Operating systems control different computer processes, such as running a spreadsheet program or accessing information from the computer's memory. One important process is interpreting commands, enabling the user to communicate with the computer. Some command interpreters are text oriented, requiring commands to be typed in or to be selected via function keys on a keyboard. Other command interpreters use graphics and let the user communicate by pointing and clicking on an icon, an on-screen picture that represents a specific command. Beginners generally find graphically oriented interpreters easier to use, but many experienced computer users prefer text-oriented command interpreters.
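A text-oriented command interpreter is, at its core, a loop that reads a typed command and carries out the matching action. The sketch below shows only that loop; the command names are hypothetical, and real interpreters do far more (argument parsing, program loading, and so on).

```cpp
#include <iostream>
#include <string>

// A toy text-oriented command interpreter: read a command typed by the
// user and carry out the matching action.
int main() {
    std::string command;
    while (true) {
        std::cout << "> ";
        if (!std::getline(std::cin, command)) break;  // end of input
        if (command == "date") {
            std::cout << "(would print the current date)\n";
        } else if (command == "dir") {
            std::cout << "(would list the files in the current directory)\n";
        } else if (command == "exit") {
            break;
        } else if (!command.empty()) {
            std::cout << "unknown command: " << command << "\n";
        }
    }
}
```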
Operating systems are either single-tasking or multitasking. The more primitive single-tasking operating systems can run only one process at a time. For instance, when the computer is printing a document, it cannot start another process or respond to new commands until the printing is completed.
All modern operating systems are multitasking and can run several processes simultaneously. In most computers, however, there is only one central processing unit (CPU; the computational and control unit of the computer), so a multitasking OS creates the illusion of several processes running simultaneously on the CPU. The most common mechanism used to create this illusion is time-slice multitasking, whereby each process is run individually for a fixed period of time. If the process is not completed within the allotted time, it is suspended and another process is run. This exchanging of processes is called context switching. The OS performs the “bookkeeping” that preserves a suspended process. It also has a mechanism, called a scheduler, that determines which process will be run next. The scheduler runs short processes quickly to minimize perceptible delay. The processes appear to run simultaneously because the user's sense of time is much slower than the processing speed of the computer.
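Time-slice multitasking can be modeled with a simple simulation. The sketch below gives each process at most one fixed slice per turn, suspends it if it is unfinished (a context switch), and moves it to the back of the ready queue; the process names and running times are made up for illustration.

```cpp
#include <algorithm>
#include <deque>
#include <iostream>
#include <string>

struct Process {
    std::string name;
    int remainingMs;  // work still to do, in milliseconds
};

// Round-robin, time-slice scheduling: run each process for at most one
// fixed slice; if it is not finished, suspend it (context switch) and put
// it at the back of the ready queue so another process can run.
int main() {
    const int timeSliceMs = 20;
    std::deque<Process> readyQueue = {
        {"editor", 30}, {"printer spooler", 50}, {"clock", 10}};

    while (!readyQueue.empty()) {
        Process p = readyQueue.front();   // scheduler picks the next process
        readyQueue.pop_front();
        int ran = std::min(timeSliceMs, p.remainingMs);
        p.remainingMs -= ran;
        std::cout << p.name << " ran " << ran << " ms";
        if (p.remainingMs > 0) {
            std::cout << ", suspended (context switch)\n";
            readyQueue.push_back(p);      // bookkeeping: save its state, requeue it
        } else {
            std::cout << ", finished\n";
        }
    }
}
```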
Operating systems can use a technique known as virtual memory to run processes that require more main memory than is actually available. To implement this technique, space on the hard drive is used to mimic the extra memory needed. Accessing the hard drive is more time-consuming than accessing main memory, however, so performance of the computer slows.
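The performance cost of virtual memory can also be illustrated with a toy simulation. In the sketch below, pages found in main memory are cheap to access, while pages that must be fetched from the hard drive cost far more; the page numbers, memory size, and relative costs are invented for illustration, and the least-recently-used eviction rule shown is only one of several policies real operating systems use.

```cpp
#include <iostream>
#include <iterator>
#include <list>
#include <unordered_map>

int main() {
    const std::size_t framesInMemory = 3;       // main memory holds only 3 pages
    std::list<int> residentPages;               // front = least recently used
    std::unordered_map<int, std::list<int>::iterator> where;
    long long cost = 0;

    int accesses[] = {1, 2, 3, 1, 4, 2, 5, 1};  // pages the program touches, in order
    for (int page : accesses) {
        auto it = where.find(page);
        if (it != where.end()) {
            cost += 1;                           // hit: page already in main memory
            residentPages.erase(it->second);     // re-mark it as most recently used
        } else {
            cost += 1000;                        // miss: fetch the page from disk (slow)
            if (residentPages.size() == framesInMemory) {
                int victim = residentPages.front();   // evict the least recently used page
                residentPages.pop_front();
                where.erase(victim);
            }
        }
        residentPages.push_back(page);
        where[page] = std::prev(residentPages.end());
    }
    std::cout << "total access cost: " << cost << "\n";
}
```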
Compiler, in computer science, computer program that translates source code, instructions in a program written by a software engineer, into object code, those same instructions written in a language the computer's central processing unit (CPU) can read and interpret. Software engineers write source code using high-level programming languages that people can understand. Computers cannot directly execute source code, but need a compiler to translate these instructions into a low-level language called machine code.
Compilers collect and reorganize (compile) all the instructions in a given set of source code to produce object code. Object code is often the same as or similar to a computer’s machine code. If the object code is the same as the machine language, the computer can run the program immediately after the compiler produces its translation. If the object code is not in machine language, other programs—such as assemblers, binders, linkers, and loaders—finish the translation.
Most programming languages—such as C, C++, and Fortran—use compilers, but some—such as BASIC and LISP—use interpreters. An interpreter analyzes and executes each line of source code one by one. Interpreters produce initial results faster than compilers, but the source code must be re-interpreted with every use, and interpreted programs generally run more slowly than compiled programs.
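The difference between compiling and interpreting can be made concrete with a miniature interpreter. The sketch below analyzes and executes each line of a made-up two-command language immediately, rather than translating the whole program into object code first.

```cpp
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Interpret a tiny made-up language one line at a time. Each line is either
// "SET name value" or "PRINT name"; the interpreter analyzes and executes
// it immediately instead of compiling the whole program first.
int main() {
    std::vector<std::string> program = {
        "SET x 2",
        "SET y 40",
        "PRINT x",
        "PRINT y",
    };
    std::map<std::string, int> variables;

    for (const std::string& line : program) {
        std::istringstream words(line);
        std::string keyword, name;
        words >> keyword >> name;
        if (keyword == "SET") {
            int value;
            words >> value;
            variables[name] = value;           // execute the line right away
        } else if (keyword == "PRINT") {
            std::cout << name << " = " << variables[name] << "\n";
        }
    }
}
```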
Most computer languages use different versions of compilers for different types of computers or operating systems, so one language may have different compilers for personal computers (PC) and Apple Macintosh computers. Many different manufacturers often produce versions of the same programming language, so compilers for a language may vary between manufacturers.
Consumer software programs are compiled and translated into machine language before they are sold. Some manufacturers provide source code, but usually only programmers find the source code useful. Thus programs bought off the shelf can be executed, but usually their source code cannot be read or modified.
Hungarian-American mathematician John Kemeny and American mathematician Thomas Kurtz at Dartmouth College in Hanover, New Hampshire, developed BASIC (Beginner’s All-purpose Symbolic Instruction Code) in 1964. The language was easier to learn than its predecessors and became popular due to its friendly, interactive nature and its inclusion on early personal computers. Unlike languages that require all their instructions to be translated into machine code first, BASIC is turned into machine language line by line as the program runs. BASIC commands typify high-level languages because of their simplicity and their closeness to natural human language.
Assembly language uses easy-to-remember commands that are more understandable to programmers than machine-language commands. Each machine language instruction has an equivalent command in assembly language. For example, in one Intel assembly language, the statement “MOV A, B” instructs the computer to copy the data in location B into location A. The same instruction in machine code is a string of 16 0s and 1s. Once an assembly-language program is written, it is converted to a machine-language program by another program called an assembler.
Assembly language is fast and powerful because of its correspondence with machine language. It is still difficult to use, however, because assembly-language instructions are a series of abstract codes and each instruction carries out a relatively simple task. In addition, different CPUs use different machine languages and therefore require different programs and different assembly languages. Assembly language is sometimes inserted into a high-level language program to carry out specific hardware tasks or to speed up parts of the high-level program that are executed frequently.
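The correspondence between a high-level statement and a sequence of simple machine-level steps can be suggested as follows. The assembly-style mnemonics in the comments are generic and illustrative; they are not the output of any particular compiler or assembler.

```cpp
// A single high-level statement typically expands into several simple,
// machine-level steps. The mnemonics in the comments are generic,
// illustrative assembly (not from a specific processor or assembler).
int add(int a, int b) {
    int sum = a + b;   // MOV  reg1, a        ; copy a into a register
                       // ADD  reg1, b        ; add b to the register
                       // MOV  sum, reg1      ; store the result in sum
    return sum;        // MOV  returnReg, sum ; place the result in the return register
                       // RET                 ; return to the caller
}

int main() { return add(2, 3) == 5 ? 0 : 1; }
```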
Assembly Language, in computer science, a type of low-level computer programming language in which each statement corresponds directly to a single machine instruction. Assembly languages are thus specific to a given processor. After writing an assembly language program, the programmer must use the assembler specific to the microprocessor to translate the assembly language into machine code. Assembly language provides precise control of the computer, but assembly language programs written for one type of computer must be rewritten to operate on another type. Assembly language might be used instead of a high-level language for any of three major reasons: speed, control, and preference. Programs written in assembly language usually run faster than those generated by a compiler; use of assembly language lets a programmer interact directly with the hardware (processor, memory, display, and input/output ports).
High-Level Language, in computer science, a computer language that provides a certain level of abstraction from the underlying machine language through the use of declarations, control statements, and other syntactical structures. In practice, the term comprises every computer language above assembly language.
One especially powerful feature of OOP languages is a property known as inheritance. Inheritance allows an object to take on the characteristics and functions of other objects to which it is functionally connected. Programmers connect objects by grouping them together in different classes and by grouping the classes into hierarchies. These classes and hierarchies allow programmers to define the characteristics and functions of objects without needing to repeat source code, the coded instructions in a program. Thus, using OOP languages can greatly reduce the time it takes for a programmer to write an application, and also can reduce the size of the program. OOP languages are flexible and adaptable, so programs or parts of programs can be used for more than one task. Programs written with OOP languages are generally shorter in length and contain fewer bugs, or mistakes, than those written with non-OOP languages.
Low-Level Language, in computer science, a computer programming language that is machine-dependent and/or that offers few control instructions and data types. Each statement in a program written in a low-level language usually corresponds to one machine instruction. Assembly language is considered a low-level language.
Computer programs that can be run by a computer’s operating system are called executables. An executable program is a sequence of extremely simple instructions known as machine code. These instructions are specific to the individual computer’s CPU and associated hardware; for example, Intel Pentium and PowerPC microprocessor chips each have different machine languages and require different sets of codes to perform the same task. Machine code instructions are few in number (roughly 20 to 200, depending on the computer and the CPU). Typical instructions are for copying data from a memory location or for adding the contents of two memory locations (usually registers in the CPU). Complex tasks require a sequence of these simple instructions. Machine code instructions are binary—that is, sequences of bits (0s and 1s). Because these sequences are long strings of 0s and 1s and are usually not easy to understand, computer instructions usually are not written in machine code. Instead, computer programmers write code in languages known as an assembly language or a high-level language.
Object Code, in computer science, translated version of source code—the statements of a particular computer program that can either be read by the computer directly, or read by the computer after it is further translated. Object code may also be called target code or the object program.
Object-Oriented Programming (OOP), in computer science, type of high-level computer language that uses self-contained, modular instruction sets for defining and manipulating aspects of a computer program. These discrete, predefined instruction sets are called objects and they may be used to define variables, data structures, and procedures for executing data operations. In OOP, objects have built-in rules for communicating with one another. By using objects as stable, preexisting building blocks, programmers can pursue their main objectives and specify tasks from the top down, manipulating or combining objects to modify existing programs and to create entirely new ones.
Object-oriented programming began with Simula, a programming language developed from 1962 to 1967 by Ole-Johan Dahl and Kristen Nygaard at the Norwegian Computing Center in Oslo, Norway. Simula introduced definitive features of OOP, including objects and inheritance. In the early 1970s Alan Kay developed Smalltalk, another early OOP language, at the Palo Alto Research Center of the Xerox Corporation. Smalltalk made revolutionary use of a graphical user interface (GUI), a feature that allows the user to select commands using a mouse. GUIs became a central feature of operating systems such as Macintosh OS and Windows.
The most popular OOP language is C++, developed by Bjarne Stroustrup at Bell Laboratories in the early 1980s. In 1995 Sun Microsystems, Inc., released Java, an OOP language that can run on most types of computers regardless of platform. In some ways Java represents a simplified version of C++ but adds other features and capabilities as well, and it is particularly well suited for writing interactive applications to be used on the World Wide Web.
Object-oriented programming (OOP) languages, such as C++ and Java, are based on traditional high-level languages, but they enable a programmer to think in terms of collections of cooperating objects instead of lists of commands. Objects, such as a circle, have properties such as the radius of the circle and the command that draws it on the computer screen. Classes of objects can inherit features from other classes of objects. For example, a class defining squares can inherit features such as right angles from a class defining rectangles. This set of programming classes simplifies the programmer’s task, resulting in more “reusable” computer code. Reusable code allows a programmer to use code that has already been designed, written, and tested. This makes the programmer’s task easier, and it results in more reliable and efficient programs.
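A brief C++ sketch of the idea just described might look like this; the class and member names are illustrative. The Square class inherits the Rectangle class's sides and area calculation instead of repeating that code.

```cpp
#include <iostream>

// A rectangle knows its width and height and how to compute its area.
class Rectangle {
public:
    Rectangle(double width, double height) : width_(width), height_(height) {}
    double area() const { return width_ * height_; }
protected:
    double width_, height_;
};

// A square inherits the rectangle's features (its sides and its area
// calculation) instead of redefining them; it only constrains the two
// sides to be equal.
class Square : public Rectangle {
public:
    explicit Square(double side) : Rectangle(side, side) {}
};

int main() {
    Rectangle r(3.0, 4.0);
    Square s(5.0);
    std::cout << "rectangle area: " << r.area() << "\n";  // 12
    std::cout << "square area: " << s.area() << "\n";     // 25
}
```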
Other high-level languages in use today include C, C++, Ada, Pascal, LISP, Prolog, COBOL, Visual Basic, and Java. Some languages, such as the “markup languages” known as HTML, XML, and their variants, are intended to display data, graphics, and media selections, especially for users of the World Wide Web. Markup languages are often not considered programming languages, but they have become increasingly sophisticated.
Programming languages contain the series of commands that create software. A CPU has a limited set of instructions known as machine code that it is capable of understanding. The CPU can understand only this language. All other programming languages must be converted to machine code before the computer can understand them. Computer programmers, however, prefer these other languages because their word-like commands are easier to use. Programs written in them must first be translated so that the computer can run them, and the translation can lead to code that is less efficient to run than code written directly in the machine's language.
Source Code, in computer science, human-readable program statements written in a high-level or assembly language, as opposed to object code, which is derived from the source code and designed to be machine-readable.
Vector Graphics, in computer science, a method of generating images that uses mathematical descriptions to determine the position, length, and direction in which lines are to be drawn. In vector graphics, objects are created as collections of lines, rather than as patterns of individual dots (pixels), as is the case with raster graphics.
Raster Graphics, in computer science, a method of generating graphics in which images are stored as multitudes of small, independently controlled dots (pixels) arranged in rows and columns. Raster graphics treats an image as a collection of such dots.
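The contrast between the two approaches can be shown by storing the same line both ways. In the sketch below, the vector form is just the line's endpoints, while the raster form is a grid of individually controlled dots; the grid size and coordinates are illustrative.

```cpp
#include <iostream>
#include <vector>

// Vector form: a line is described mathematically by its endpoints.
struct Line {
    int x0, y0, x1, y1;
};

int main() {
    const int width = 8, height = 4;
    Line line{1, 1, 6, 1};  // a horizontal line, stored as just four numbers

    // Raster form: the same line as a grid of pixels (true = dot on).
    std::vector<std::vector<bool>> pixels(height, std::vector<bool>(width, false));
    for (int x = line.x0; x <= line.x1; ++x) {
        pixels[line.y0][x] = true;   // paint each dot the line passes through
    }

    for (const auto& row : pixels) {
        for (bool on : row) std::cout << (on ? '#' : '.');
        std::cout << "\n";
    }
}
```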
PostScript Font, in computer science, a font defined in terms of the PostScript page-description language rules and intended to be printed on a PostScript-compatible printer. PostScript fonts are distinguished from bit-mapped fonts by their smoothness, detail, and faithfulness to standards of quality established in the typographic industry. Fonts that appear on the screen—for example, as bit-mapped characters in a graphical user interface—are called screen fonts. When a document displayed in a screen font is sent to a PostScript printer, the printer uses the PostScript version if the font exists. If the font doesn't exist but a version is installed on the computer, that font is downloaded. If there is no PostScript font installed in either the printer or the computer, the bit-mapped font is translated into PostScript and the printer prints text using the bit-mapped font.
Pixel, in computer science, short for picture element; sometimes called a pel. One spot in a rectilinear grid of thousands of such spots that are individually “painted” to form an image produced on the screen by a computer or on paper by a printer. Just as a bit is the smallest unit of information a computer can process, a pixel is the smallest element that display or print hardware and software can manipulate in creating letters, numbers, or graphics.
If a pixel has only two color values (typically black and white), it can be encoded by 1 bit of information. An image can also be represented in more than two colors, for example in a range of grays: if more than 1 bit is used to represent a pixel, a larger range of colors or shades of gray can be represented (2 bits for four colors or shades of gray, 4 bits for sixteen colors, and so on). Typically, an image of two colors is called a bit map, and an image of more than two colors is called a pixel map.
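The arithmetic behind these figures is simply powers of two: n bits per pixel can represent 2 to the nth power distinct colors or shades of gray, as the short check below shows.

```cpp
#include <iostream>

// Number of distinct colors (or gray shades) representable with a given
// number of bits per pixel: 1 bit -> 2, 2 bits -> 4, 4 bits -> 16, and so on.
int main() {
    for (int bitsPerPixel : {1, 2, 4, 8, 24}) {
        unsigned long colors = 1UL << bitsPerPixel;   // 2 raised to bitsPerPixel
        std::cout << bitsPerPixel << " bit(s) per pixel -> "
                  << colors << " colors\n";
    }
}
```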
Object-Oriented Graphics, also called structured graphics. Computer graphics that are based on the use of “construction elements” (graphics primitives), such as lines, curves, circles, and squares. Object-oriented graphics, used in applications such as computer-aided design and drawing and illustration programs, describe an image mathematically as a set of instructions for creating the objects in the image. This approach contrasts with bit-mapped graphics, the other widely used approach to creating images, which represents a graphic as a group of black and white or colored dots arranged in a certain pattern. Object-oriented graphics enable the user to manipulate objects as entire units—for example, to change the length of a line or enlarge a circle—whereas bit-mapped graphics require repainting individual dots in the line or circle. Because objects are described mathematically, object-oriented graphics can also be layered, rotated, and magnified relatively easily.
Halftone, the printed reproduction of a photograph or other illustration as a set of tiny, evenly spaced spots of variable diameter that, when printed, visually blur together to appear as shades of gray. Many printers used in desktop publishing, notably laser printers and digital imagesetters, are able to print halftone images. In traditional publishing, halftones are created by photographing an image through a meshlike screen; the darker the shade at a particular point in the image, the larger the spot in the resulting photograph. In desktop publishing, halftone spots are created electronically by mapping each gray level onto a collection of dots (called a spot) printed by the laser printer or imagesetter.
Font, traditionally, a set of characters of the same typeface (such as Courier), style (such as italic), stroke weight (such as bold), and size. A font is not to be confused with a typeface. Font refers to all the characters available in a particular size, style, and weight for a particular design; typeface refers to the design itself. Fonts are used by computers for on-screen displays and by printers for hard-copy output. In both cases, fonts are created either from bit maps (patterns of dots) or from outlines (as defined by a set of mathematical formulas). Programs that allow the use of different fonts are able to send information about typeface and size to a printer, even if they are not able to simulate different typefaces on the screen. The printer can then reproduce the font, provided either that the capability is built in or that a font description is available to the printer.
False-Color Imagery, graphic technique that displays images in false (not true-to-life) colors to enhance certain features. False-color imagery is widely used in displaying electronic images taken by spacecraft; for example, Earth-survey satellites such as Landsat. Any colors can be selected by a computer processing the data received from the satellite or spacecraft.
Clipboard, in relation to computers, a special memory resource maintained by operating systems such as the Apple Macintosh operating system, Microsoft Windows, and the OS/2 Presentation Manager. A clipboard stores a copy of the last information that was “copied” or “cut.” A “paste” operation passes data from the clipboard to the current program. A clipboard allows information to be transferred from one program to another, provided the second program can read data generated by the first. Data copied using the clipboard is static and will not reflect later changes.
Clip Art, in computer applications, such as desktop publishing and word-processing programs, collections of graphics that can be copied and added to brochures, newsletters, and other documents created with the programs. A wide variety of collections are available; some are packaged with programs, others can be purchased separately. Depending on the program, clip art images may be disassembled to allow a person to use only part of an image.
Bit-Mapped Graphics, in computer science, computer graphics that are stored and held as collections of bits in memory locations corresponding to pixels on the screen. Bit-mapped graphics are typical of paint programs, which treat images as collections of dots rather than as shapes. Within a computer's memory, a bit-mapped graphic is represented as an array (group) of bits that describe the characteristics of the individual pixels making up the image. Bit-mapped graphics displayed in color require several to many bits per pixel, each describing some aspect of the color of a single spot on the screen.
Bit Image, in computer science, a sequential collection of bits that represents, in memory, an image to be displayed on the screen, particularly in systems having a graphical user interface. Each bit in a bit image corresponds to one pixel (dot) on the screen. The screen itself, for example, represents a single bit image; similarly, the dot patterns for all the characters in a font represent a bit image of the font. On a computer such as the Macintosh 512K, which has a black-and-white screen, the bit values in a bit image can be either 0, to display white, or 1, to display black. The “pattern” of 0s and 1s in the bit image then determines the pattern of white and black dots forming an image on the screen. On a Macintosh or other computer that supports color, the corresponding description of on-screen bits is called a pixel image because more than one bit is needed to represent each pixel.
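A bit image of the kind described can be represented as bytes in which each bit controls one dot. The sketch below packs eight pixels into each byte and provides helpers to paint and read individual dots; the image dimensions are illustrative.

```cpp
#include <cstdint>
#include <iostream>
#include <vector>

// A bit image: one bit per on-screen dot, packed eight pixels to a byte.
// Bit value 1 paints the dot black, 0 leaves it white (as on early
// black-and-white Macintosh screens).
struct BitImage {
    int width, height;
    std::vector<std::uint8_t> bytes;

    BitImage(int w, int h) : width(w), height(h), bytes((w * h + 7) / 8, 0) {}

    void setPixel(int x, int y, bool black) {
        int index = y * width + x;          // which bit in the whole image
        if (black) bytes[index / 8] |=  (1u << (index % 8));
        else       bytes[index / 8] &= ~(1u << (index % 8));
    }
    bool isBlack(int x, int y) const {
        int index = y * width + x;
        return (bytes[index / 8] >> (index % 8)) & 1u;
    }
};

int main() {
    BitImage image(16, 8);
    image.setPixel(3, 2, true);
    std::cout << "pixel (3,2) is " << (image.isBlack(3, 2) ? "black" : "white") << "\n";
}
```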
Byte, a unit of information built from bits, the smallest units of information used in computers. Bits have one of two absolute values, either 0 or 1. These bit values physically correspond to whether transistors and other electronic circuitry in a computer are on or off. A byte is usually composed of 8 bits, although bytes composed of 16 bits are also used. See Number Systems.
The particular sequence of bits in the byte encodes a unit of information such as a keyboard character. One byte typically represents a single character such as a number, letter, or symbol. Most computers operate by manipulating groups of 2, 4, or 8 bytes called words.
Software designers use computers and software to combine bytes in complex ways and create meaningful data in the form of text files or binary files (files that contain data to be processed and interpreted by a computer). Bits and bytes are the basis for creating all meaningful information and programs on computers. For example, bits form bytes, which represent characters and can be combined to form words, sentences, paragraphs, and ultimately entire documents.
Bytes are the key unit for measuring quantity of data. Data quantity is commonly measured in kilobytes (1,024 bytes), megabytes (1,048,576 bytes), or gigabytes (about 1 billion bytes). A standard floppy disk normally holds 1.44 megabytes of data, which equates to approximately 1,400,000 keyboard characters, among other types of data. At this storage capacity, a single disk can hold a document approximately 700 pages long, with 2,000 characters per page.
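The arithmetic behind those figures is straightforward. The sketch below reproduces it, assuming one byte per keyboard character and 2,000 characters per page as in the text; the result comes out close to the approximate figures cited above.

    # Rough capacity arithmetic for a 1.44 MB floppy disk, assuming
    # one byte per keyboard character and 2,000 characters per page.
    floppy_bytes = 1.44 * 1_000_000      # "1.44 MB" taken as roughly 1,440,000 bytes
    chars = floppy_bytes                 # one character per byte
    pages = chars / 2000
    print(int(chars), "characters, about", int(pages), "pages")
    # -> 1440000 characters, about 720 pages (close to the ~1,400,000 / ~700 cited)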
The term byte was first used in 1956 by German-born American computer scientist Werner Buchholz to prevent confusion with the word bit. He described a byte as a group of bits used to encode a character. The eight-bit byte was created that year and was soon adopted by the computer industry as a standard.
The number of bits used by a computer’s central processing unit (CPU) for addressing information represents one measure of a computer’s speed and power. Computers today often use 16, 32, or 64 bits (groups of 2, 4, or 8 bytes) in their addressing.
Although microprocessors are still technically considered to be hardware, portions of their function are also associated with computer software. Because microprocessors have both hardware and software aspects, they are often referred to as firmware.
Graphical User Interface (GUI), in computer science, a display format that enables the user to choose commands, start programs, and see lists of files and other options by pointing to pictorial representations (icons) and lists of menu items on the screen (see User Interface). Choices can generally be activated either with the keyboard or with a mouse. GUIs were inspired by the pioneering research of computer scientists at the Xerox Corporation's Palo Alto Research Center in the 1970s. Modern GUIs are used on the Macintosh operating system, Microsoft Windows, and the OS/2 Presentation Manager.
For application developers, GUIs offer an environment that handles the direct interaction with the computer. This frees the developer to concentrate on the application without worrying about the details of screen display, mouse control, or keyboard input. It also provides programmers with standard mechanisms for frequently repeated tasks such as opening windows and dialog boxes. Another benefit is that applications written for a GUI are device-independent: as the interface changes to support new input and output devices, such as a large-screen monitor or an optical storage device, the applications can use those devices without modification.
To function, hardware requires physical connections that allow components to communicate and interact. A bus provides a common interconnected system composed of a group of wires or circuitry that coordinates and moves information between the internal parts of a computer. A computer bus consists of two channels, one that the CPU uses to locate data, called the address bus, and another to send the data to that address, called the data bus. A bus is characterized by two features: how much information it can manipulate at one time, called the bus width, and how quickly it can transfer these data.
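One consequence of bus width is the amount of memory the address bus can reach: an address bus with n lines can distinguish 2^n distinct addresses. The sketch below works out a few common cases; the specific widths chosen are only examples.

    # Number of distinct addresses reachable by an n-line address bus: 2**n.
    for width in (16, 32, 64):                      # example address-bus widths
        addresses = 2 ** width
        print(f"{width}-bit address bus -> {addresses:,} addresses")
    # 16-bit -> 65,536; 32-bit -> 4,294,967,296; 64-bit -> 18,446,744,073,709,551,616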
A serial connection is a wire or set of wires used to transfer information from the CPU to an external device such as a mouse, keyboard, modem, scanner, or some types of printers. This type of connection transfers only one piece of data at a time and is therefore slow. The advantage of using a serial connection is that it provides effective connections over long distances.
A parallel connection uses multiple sets of wires to transfer blocks of information simultaneously. Most scanners and printers use this type of connection. A parallel connection is much faster than a serial connection, but it is limited to distances of less than 3 m (10 ft) between the CPU and the external device.
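The contrast can be pictured as sending the bits of a byte one at a time over a single line versus all eight at once over eight lines. The toy sketch below models that difference; it is a conceptual illustration, not a driver for a real serial or parallel port.

    # Conceptual contrast: serial sends the bits of a byte one at a time over
    # a single line; parallel sends all eight bits at once over eight lines.

    def serial_send(byte):
        """Yield bits one per transfer, most significant bit first."""
        for i in range(7, -1, -1):
            yield (byte >> i) & 1              # one bit per transfer

    def parallel_send(byte):
        """Return all eight bits in a single transfer."""
        return [(byte >> i) & 1 for i in range(7, -1, -1)]

    value = 0b01000001                         # the byte for ASCII "A"
    print(list(serial_send(value)))            # eight separate transfers
    print(parallel_send(value))                # one transfer of eight bits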
Bit, abbreviation for binary digit, the smallest unit of information in a computer. A bit is represented by the numbers 1 and 0, which correspond to the states on and off, true and false, or yes and no.
Bits are the building blocks for all information processing that goes on in digital electronics and computers. Bits actually represent the state of a transistor in the logic circuits of a computer. The number 1 (meaning on, yes, or true) is used to represent a transistor with current flowing through it—essentially a closed switch. The number 0 (meaning off, no, or false) is used to represent a transistor with no current flowing through it—an open switch. All computer information processing can be understood in terms of vast arrays of transistors (3.1 million transistors on the Pentium chip) switching on and off, depending on the bit value they have been assigned.
Bits are usually combined into larger units called bytes. A byte is composed of eight bits. The values that a byte can take on range between 00000000 (0 in decimal notation) and 11111111 (255 in decimal notation). This means that a byte can represent 2⁸ (2 raised to the eighth power), or 256, possible states (0-255). Bytes are combined into groups of 1 to 8 bytes called words. The size of the words used by a computer’s central processing unit (CPU) depends on the bit-processing ability of the CPU. A 32-bit processor, for example, can use words that are up to four bytes long (32 bits).
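The range of values follows directly from the eight bit positions: each additional bit doubles the number of distinct patterns, giving 2⁸ = 256. A brief check:

    # Eight bits give 2**8 = 256 distinct patterns, numbered 0 through 255.
    print(2 ** 8)                 # -> 256
    print(int("00000000", 2))     # -> 0   (all bits off)
    print(int("11111111", 2))     # -> 255 (all bits on)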
Computers are often classified by the number of bits they can process at one time, as well as by the number of bits used to represent addresses in their main memory (RAM). Computer graphics are described by the number of bits used to represent pixels (short for picture elements), the smallest identifiable parts of an image. In monochrome images, each pixel is made up of one bit. In 256-color and gray-scale images, each pixel is made up of one byte (eight bits). In true color images, each pixel is made up of at least 24 bits.
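Those per-pixel sizes determine how much memory an image needs. The sketch below estimates raw, uncompressed sizes for a hypothetical 640 x 480 image at each of the color depths mentioned above.

    # Raw (uncompressed) memory needed for a 640 x 480 image at common depths.
    width, height = 640, 480                 # hypothetical screen-sized image
    for name, bits_per_pixel in (("monochrome", 1),
                                 ("256-color / gray-scale", 8),
                                 ("true color", 24)):
        total_bits = width * height * bits_per_pixel
        print(f"{name}: {total_bits // 8:,} bytes")
    # monochrome: 38,400 bytes; 256-color: 307,200 bytes; true color: 921,600 bytes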
The term bit was introduced by John Tukey, an American statistician and early computer scientist. He first used the term in 1946, as a shortened form of the term binary digit.
Hardware (computer), equipment involved in the function of a computer. Computer hardware consists of the components that can be physically handled. The function of these components is typically divided into three main categories: input, output, and storage. Components in these categories connect, via wires or circuitry called a bus, to the computer's microprocessors, specifically its central processing unit (CPU), the electronic circuitry that provides the computational ability and control of the computer.