The "memory wall" is the growing disparity of speed between CPU and memory outside the CPU chip. An important reason for this disparity is the limited communication bandwidth beyond chip boundaries. From 1986 to 2000, CPU speed improved at an annual rate of 55% while memory speed only improved at 10%. Given these trends, it was expected that memory latency would become an overwhelming bottleneck in computer performance. [3]
Currently, CPU speed improvements have slowed significantly, partly due to major physical barriers and partly because current CPU designs have already hit the memory wall in some sense. Intel summarized these causes in its Platform 2015 documentation:
“First of all, as chip geometries shrink and clock frequencies rise, the transistor leakage current increases, leading to excess power consumption and heat (more on power consumption below). Secondly, the advantages of higher clock speeds are in part negated by memory latency, since memory access times have not been able to keep pace with increasing clock frequencies. Third, for certain applications, traditional serial architectures are becoming less efficient as processors get faster (due to the so-called Von Neumann bottleneck), further undercutting any gains that frequency increases might otherwise buy. In addition, partly due to limitations in the means of producing inductance within solid state devices, resistance-capacitance (RC) delays in signal transmission are growing as feature sizes shrink, imposing an additional bottleneck that frequency increases don't address.”
The RC delays in signal transmission were also noted in Clock Rate versus IPC: The End of the Road for Conventional Microarchitectures, which projects a maximum of 12.5% average annual CPU performance improvement between 2000 and 2014. The data on Intel processors clearly shows a slowdown in performance improvements in recent processors. However, Intel's newer Core 2 Duo processors (codenamed Conroe) show a significant improvement over the preceding Pentium 4: thanks to a more efficient architecture, performance increased even though the clock rate actually decreased.
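To make these compound growth rates concrete, here is a minimal back-of-the-envelope sketch in Python. It uses only the figures quoted above (55% and 10% annual improvement, and the projected 12.5% after 2000); the continued 10% memory rate in the last line is my own assumption, not a figure from the cited sources.

cpu_rate, mem_rate = 0.55, 0.10   # annual improvement rates quoted above (1986-2000)

def speed_gap(years):
    """How much faster the CPU becomes relative to memory after compounding."""
    return (1 + cpu_rate) ** years / (1 + mem_rate) ** years

for years in (1, 5, 10, 14):
    print(f"after {years:2d} years, CPU/memory speed gap ~ {speed_gap(years):6.1f}x")

# At the projected post-2000 CPU rate of ~12.5% per year (memory still assumed ~10%/yr),
# the gap widens far more slowly:
print(f"2000-2014 projection: ~{1.125 ** 14 / 1.10 ** 14:.1f}x")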
Saturday, April 18, 2009
DDR2 RAM

Modern types of writable RAM generally store a bit of data in either the state of a flip-flop, as in SRAM (static RAM), or as a charge in a capacitor (or transistor gate), as in DRAM (dynamic RAM), EPROM, EEPROM and Flash. Some types have circuitry to detect and/or correct random faults called memory errors in the stored data, using parity bits or error correction codes. RAM of the read-only type, ROM, instead uses a metal mask to permanently enable/disable selected transistors, instead of storing a charge in them.
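As a concrete illustration of the parity-bit idea mentioned above, the short Python sketch below stores one extra bit per word so that a single flipped bit can be detected on read-back. It is a minimal sketch of the general technique, not any particular memory controller's implementation; full error-correcting codes (such as Hamming codes) extend the same idea with several check bits so that single-bit errors can also be located and corrected.

def even_parity(bits):
    """Choose the parity bit so the total number of 1s (data + parity) is even."""
    return sum(bits) % 2

data = [1, 0, 1, 1, 0, 0, 1, 0]          # one byte of data to be stored
stored = data + [even_parity(data)]      # nine bits are actually written to memory

def looks_clean(word):
    """On read-back, an even count of 1s means no single-bit error was observed."""
    return sum(word) % 2 == 0

assert looks_clean(stored)               # clean read
corrupted = stored.copy()
corrupted[3] ^= 1                        # simulate one bit flipping while stored
assert not looks_clean(corrupted)        # the error is detected (though not corrected)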
As both SRAM and DRAM are volatile, other forms of computer storage, such as disks and magnetic tapes, have been used as persistent storage in traditional computers. Many newer products instead rely on flash memory to maintain data when not in use, such as PDAs or small music players. Certain personal computers, such as many rugged computers and netbooks, have also replaced magnetic disks with flash drives. With flash memory, only the NOR type is capable of true random access, allowing direct code execution, and is therefore often used instead of ROM; the lower cost NAND type is commonly used for bulk storage in memory cards and solid-state drives.
History of RAM
An early type of widespread writable random access memory was the magnetic core memory, developed from 1949 to 1952, and subsequently used in most computers up until the development of the static and dynamic integrated RAM circuits in the late 1960s and early 1970s. Before this, computers used relays, delay line memory or various kinds of vacuum tube arrangements to implement "main" memory functions (i.e., hundreds or thousands of bits), some of which were random access, some not. Latches built out of vacuum tube triodes, and later, out of discrete transistors, were used for smaller and faster memories such as registers and (random access) register banks. Prior to the development of integrated ROM circuits, permanent (or read-only) random access memory was often constructed using semiconductor diode matrices driven by address decoders.
History of hard disk drives
The commercial usage of hard disk drives began in 1956 with the shipment of an IBM 305 RAMAC system including IBM Model 350 disk storage[1].
For many years, hard disk drives were large, cumbersome devices, more suited to use in the protected environment of a data center or large office than in a harsh industrial environment (due to their delicacy), or a small office or home (due to their size and power consumption). Before the early 1980s, most hard disk drives had 8-inch (actually 195-210 mm) or 14-inch platters, required an equipment rack or a large amount of floor space (especially the large removable-media drives, which were frequently comparable in size to washing machines), and in many cases needed high-current and/or three-phase power hookups due to the large motors they used. Because of this, hard disk drives were not commonly used with microcomputers until after 1980, when Seagate Technology introduced the ST-506, the first 5.25-inch hard disk drive, with a formatted capacity of 5 megabytes.
The capacity of hard drives has grown exponentially over time. With early personal computers, a drive with a 20 megabyte capacity was considered large. During the mid to late 1990s, when PCs were capable of storing not just text files and documents but pictures, music, and video, internal drives were made with 8 to 20 GB capacities. As of early 2009, desktop hard disk drives typically have a capacity of 320 to 500 gigabytes, while the largest-capacity drives are 2 terabytes.
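A quick calculation with the two capacity points given in this section (roughly 5 MB for the first 5.25-inch drives around 1980 and about 2 TB for the largest drives of 2009) shows what this exponential growth means in practice; the resulting rate is only a rough estimate, since the endpoints themselves are approximate.

early, recent = 5e6, 2e12       # bytes: ~5 MB around 1980, ~2 TB in 2009
years = 2009 - 1980

total_growth = recent / early                 # ~400,000x overall
annual_rate = total_growth ** (1 / years) - 1

print(f"total growth: {total_growth:,.0f}x")
print(f"compound annual growth: ~{annual_rate:.0%} per year")   # roughly 55-60%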
1950s - 1970s
Main article: early IBM disk storage
The IBM 350 Disk File, invented by Reynold Johnson, was introduced in 1956 with the IBM 305 RAMAC computer. This drive had fifty 24-inch platters, with a total capacity of five million characters. A single head assembly with two heads served all of the platters, making the average access time very slow (just under one second).
The IBM 1301 Disk Storage Unit[2], announced in 1961, introduced the use of one head per data surface, with the heads riding on self-acting air bearings (flying heads).
The first disk drive to use removable media was the IBM 1311 drive, which used the IBM 1316 disk pack to store two million characters.
In 1973, IBM introduced the IBM 3340 "Winchester" disk drive, the first significant commercial use of low-mass, low-load heads with lubricated media. All modern disk drives use this technology and/or derivatives of it. Lead designer Kenneth Haughton named it after the Winchester 30-30 rifle because the developers had called it the "30-30": it was planned to have two 30 MB spindles. The actual product, however, shipped with two spindles for data modules of either 35 MB or 70 MB[3].
1980s - PC era
Internal drives became the system of choice on PCs in the 1980s. Most microcomputer hard disk drives in the early 1980s were not sold under their manufacturer's names, but by OEMs as part of larger peripherals (such as the Corvus Disk System and the Apple ProFile). The IBM PC/XT had an internal hard disk drive, however, and this started a trend toward buying "bare" drives (often by mail order) and installing them directly into a system.
External hard drives remained popular for much longer on the Apple Macintosh and other platforms. Every Mac made between 1986 and 1998 had a SCSI port on the back, making external expansion easy; also, "toaster" Compact Macs did not have easily accessible hard drive bays (or, in the case of the Mac Plus, any hard drive bay at all), so on those models external SCSI disks were the only reasonable option.
Generations
SSI, MSI and LSI
The first integrated circuits contained only a few transistors. Called "Small-Scale Integration" (SSI), they used circuits containing transistors numbering in the tens.
SSI circuits were crucial to early aerospace projects, and vice-versa. Both the Minuteman missile and Apollo program needed lightweight digital computers for their inertial guidance systems; the Apollo guidance computer led and motivated the integrated-circuit technology[citation needed], while the Minuteman missile forced it into mass-production.
These programs purchased almost all of the available integrated circuits from 1960 through 1963, and almost alone provided the demand that funded the production improvements to get the production costs from $1000/circuit (in 1960 dollars) to merely $25/circuit (in 1963 dollars).[citation needed] They began to appear in consumer products at the turn of the decade, a typical application being FM inter-carrier sound processing in television receivers.
The next step in the development of integrated circuits, taken in the late 1960s, introduced devices which contained hundreds of transistors on each chip, called "Medium-Scale Integration" (MSI).
They were attractive economically because while they cost little more to produce than SSI devices, they allowed more complex systems to be produced using smaller circuit boards, less assembly work (because of fewer separate components), and a number of other advantages.
Further development, driven by the same economic factors, led to "Large-Scale Integration" (LSI) in the mid 1970s, with tens of thousands of transistors per chip.
Integrated circuits such as 1K-bit RAMs, calculator chips, and the first microprocessors, which began to be manufactured in moderate quantities in the early 1970s, had under 4,000 transistors. True LSI circuits, approaching 10,000 transistors, began to be produced around 1974, for computer main memories and second-generation microprocessors.
ULSI, WSI, SOC and 3D-IC
To reflect further growth in complexity, the term ULSI, which stands for "Ultra-Large-Scale Integration", was proposed for chips containing more than 1 million transistors.
Wafer-scale integration (WSI) is a system of building very-large integrated circuits that uses an entire silicon wafer to produce a single "super-chip". Through a combination of large size and reduced packaging, WSI could lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term Very-Large-Scale Integration, the current state of the art when WSI was being developed.
System-on-a-Chip (SoC or SOC) is an integrated circuit in which all the components needed for a computer or other system are included on a single chip. The design of such a device can be complex and costly, and building disparate components on a single piece of silicon may compromise the efficiency of some elements. However, these drawbacks are offset by lower manufacturing and assembly costs and by a greatly reduced power budget: because signals among the components are kept on-die, much less power is required (see the section on integrated circuit packaging).
Three Dimensional Integrated Circuit (3D-IC) has two or more layers of active electronic components that are integrated both vertically and horizontally into a single circuit. Communication between layers uses on-die signaling, so power consumption is much lower than in equivalent separate circuits. Judicious use of short vertical wires can substantially reduce overall wire length for faster operation.
Integrated circuit packaging

Early USSR-made integrated circuit
The earliest integrated circuits were packaged in ceramic flat packs, which continued to be used by the military for their reliability and small size for many years. Commercial circuit packaging quickly moved to the dual in-line package (DIP), first in ceramic and later in plastic. In the 1980s, pin counts of VLSI circuits exceeded the practical limit for DIP packaging, leading to pin grid array (PGA) and leadless chip carrier (LCC) packages. Surface mount packaging appeared in the early 1980s and became popular in the late 1980s, using finer lead pitch with leads formed as either gull-wing or J-lead, as exemplified by the small-outline integrated circuit (SOIC), a carrier that occupies about 30-50% less area than an equivalent DIP and is typically about 70% thinner. This package has "gull wing" leads protruding from its two long sides and a lead spacing of 0.050 inches.
SOIC and PLCC packages became common. In the late 1990s, PQFP and TSOP packages became the most common for high-pin-count devices, though PGA packages are still often used for high-end microprocessors. Intel and AMD are currently transitioning from PGA packages to land grid array (LGA) packages on their high-end microprocessors.
Ball grid array (BGA) packages have existed since the 1970s. Flip-chip Ball Grid Array packages, which allow for much higher pin count than other package types, were developed in the 1990s. In an FCBGA package the die is mounted upside-down (flipped) and connects to the package balls via a package substrate that is similar to a printed-circuit board rather than by wires. FCBGA packages allow an array of input-output signals (called Area-I/O) to be distributed over the entire die rather than being confined to the die periphery.
Traces out of the die, through the package, and into the printed circuit board have very different electrical properties, compared to on-chip signals. They require special design techniques and need much more electric power than signals confined to the chip itself.
When multiple dies are put in one package, it is called SiP, for System In Package. When multiple dies are combined on a small substrate, often ceramic, it's called an MCM, or Multi-Chip Module. The boundary between a big MCM and a small printed circuit board is sometimes fuzzy.
Classification

Integrated circuits can be classified into analog, digital and mixed signal (both analog and digital on the same chip).
Digital integrated circuits can contain anything from one to millions of logic gates, flip-flops, multiplexers, and other circuits in a few square millimeters. The small size of these circuits allows high speed, low power dissipation, and reduced manufacturing cost compared with board-level integration. These digital ICs, typically microprocessors, DSPs, and microcontrollers, work using binary mathematics to process "one" and "zero" signals.
Analog ICs, such as sensors, power management circuits, and operational amplifiers, work by processing continuous signals. They perform functions like amplification, active filtering, demodulation, mixing, etc. Analog ICs ease the burden on circuit designers by having expertly designed analog circuits available instead of designing a difficult analog circuit from scratch.
ICs can also combine analog and digital circuits on a single chip to create functions such as A/D converters and D/A converters. Such circuits offer smaller size and lower cost, but must carefully account for signal interference.
Friday, April 17, 2009
Advances in integrated circuits

The integrated circuit from an Intel 8742, an 8-bit microcontroller that includes a CPU running at 12 MHz, 128 bytes of RAM, 2048 bytes of EPROM, and I/O in the same chip.
Among the most advanced integrated circuits are the microprocessors or "cores", which control everything from computers to cellular phones to digital microwave ovens. Digital memory chips and ASICs are examples of other families of integrated circuits that are important to the modern information society. While the cost of designing and developing a complex integrated circuit is quite high, when spread across typically millions of production units the individual IC cost is minimized. The performance of ICs is high because the small size allows short traces, which in turn allows low-power logic (such as CMOS) to be used at fast switching speeds.
ICs have consistently migrated to smaller feature sizes over the years, allowing more circuitry to be packed on each chip. This increased capacity per unit area can be used to decrease cost and/or increase functionality—see Moore's law which, in its modern interpretation, states that the number of transistors in an integrated circuit doubles every two years. In general, as the feature size shrinks, almost everything improves—the cost per unit and the switching power consumption go down, and the speed goes up. However, ICs with nanometer-scale devices are not without their problems, principal among which is leakage current (see subthreshold leakage for a discussion of this), although these problems are not insurmountable and will likely be solved or at least ameliorated by the introduction of high-k dielectrics. Since these speed and power consumption gains are apparent to the end user, there is fierce competition among the manufacturers to use finer geometries. This process, and the expected progress over the next few years, is well described by the International Technology Roadmap for Semiconductors (ITRS).
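As a small numerical sketch of the doubling rule stated above: with a two-year doubling period, a transistor count grows by a factor of 2^(years/2). The starting figure below is arbitrary and purely illustrative.

def projected_transistors(start_count, years_elapsed, doubling_period=2.0):
    """Project a transistor count forward assuming one doubling every `doubling_period` years."""
    return start_count * 2 ** (years_elapsed / doubling_period)

# e.g. a design with 1 million transistors, projected 10 years out (five doublings):
print(f"{projected_transistors(1_000_000, 10):,.0f}")   # 32,000,000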
INTEGRATED CIRCUIT
VLSI
Main article: Very-large-scale integration
Upper interconnect layers on an Intel 80486DX2 microprocessor die.
The final step in the development process, starting in the 1980s and continuing through the present, was "very large-scale integration" (VLSI). The development started with hundreds of thousands of transistors in the early 1980s, and continues beyond several billion transistors as of 2007.
There was no single breakthrough that allowed this increase in complexity, though many factors helped. Manufacturing moved to smaller rules and cleaner fabs, allowing them to produce chips with more transistors with adequate yield, as summarized by the International Technology Roadmap for Semiconductors (ITRS). Design tools improved enough to make it practical to finish these designs in a reasonable time. The more energy efficient CMOS replaced NMOS and PMOS, avoiding a prohibitive increase in power consumption. Better texts such as the landmark textbook by Mead and Conway helped schools educate more designers, among other factors.
In 1986 the first one megabit RAM chips were introduced, which contained more than one million transistors. Microprocessor chips passed the million transistor mark in 1989 and the billion transistor mark in 2005[8]. The trend continues largely unabated, with chips introduced in 2007 containing tens of billions of memory transistors [9].
INTEGRATED CIRCUIT

INVENTION
The integrated circuit was conceived by a radar scientist, Geoffrey W.A. Dummer (1909-2002), working for the Royal Radar Establishment of the British Ministry of Defence, and published at the Symposium on Progress in Quality Electronic Components in Washington, D.C. on May 7, 1952.[1] He gave many symposia publicly to propagate his ideas.
Dummer unsuccessfully attempted to build such a circuit in 1956.
The integrated circuit can be credited as being invented by both Jack Kilby of Texas Instruments[2] and Robert Noyce of Fairchild Semiconductor[3], working independently of each other. Kilby recorded his initial ideas concerning the integrated circuit in July 1958 and successfully demonstrated the first working integrated circuit on September 12, 1958.[2] Kilby won the 2000 Nobel Prize in Physics for his part in the invention of the integrated circuit.[4] Robert Noyce came up with his own idea of the integrated circuit half a year later than Kilby. Noyce's chip solved many practical problems that the microchip developed by Kilby had not. Noyce's chip, made at Fairchild, was made of silicon, whereas Kilby's chip was made of germanium.
Early developments of the integrated circuit go back to 1949, when the German engineer Werner Jacobi (Siemens AG) filed a patent for an integrated-circuit-like semiconductor amplifying device[5] showing five transistors on a common substrate arranged in a two-stage amplifier. Jacobi disclosed small and cheap hearing aids as typical industrial applications of his patent. No commercial use of his patent has been reported.
A precursor idea to the IC was to create small ceramic squares (wafers), each one containing a single miniaturized component. Components could then be integrated and wired into a bidimensional or tridimensional compact grid. This idea, which looked very promising in 1957, was proposed to the US Army by Jack Kilby, and led to the short-lived Micromodule Program (similar to 1951's Project Tinkertoy).[6] However, as the project was gaining momentum, Kilby came up with a new, revolutionary design: the IC.
The aforementioned Noyce credited Kurt Lehovec of Sprague Electric for the principle of p-n junction isolation caused by the action of a biased p-n junction (the diode) as a key concept behind the IC.[7]
Sunday, April 12, 2009
Typical PC hardware
Motherboard
The motherboard is the "body"[citation needed] of the computer. Components directly attached to the motherboard include:
• The central processing unit (CPU) performs most of the calculations which enable a computer to function, and is sometimes referred to as the "brain" of the computer. It is usually cooled by a heat sink and fan.
• The chipset mediates communication between the CPU and the other components of the system, including main memory.
• RAM (Random Access Memory) stores the code and data of all running processes (applications) and of the currently running operating system.
• The BIOS (Basic Input Output System) includes boot firmware and power management routines; once the operating system is running, most of these tasks are handled by operating system drivers.
• Internal Buses connect the CPU to various internal components and to expansion cards for graphics and sound.
o Current
The northbridge memory controller, for RAM and PCI Express
PCI Express, for graphics cards
PCI, for other expansion cards
SATA, for disk drives
o Obsolete
ATA (superseded by SATA)
AGP (superseded by PCI Express)
VLB VESA Local Bus (superseded by AGP)
ISA (expansion card slot format obsolete in PCs, but still used in industrial computers)
• External Bus Controllers support ports for external peripherals. These ports may be controlled directly by the southbridge I/O controller or based on expansion cards attached to the motherboard through the PCI bus.
o USB
o FireWire
o eSATA
Power supply
Main article: Power supply unit (computer)
Includes the power cord, switch, and cooling fan. Supplies power at appropriate voltages to the motherboard and internal disk drives.
Video display controller
Main article: Graphics card
Produces the output for the visual display unit. This will either be built into the motherboard or attached in its own separate slot (PCI, PCI-E, PCI-E 2.0, or AGP), in the form of a graphics card.
Removable media devices
Main article: Computer storage
• CD (compact disc) - the most common type of removable media, suitable for music and data.
o CD-ROM Drive - a device used for reading data from a CD.
o CD Writer - a device used for both reading and writing data to and from a CD.
• DVD (digital versatile disc) - a popular type of removable media that is the same dimensions as a CD but stores up to 12 times as much information. It is the most common way of transferring digital video, and is popular for data storage.
o DVD-ROM Drive - a device used for reading data from a DVD.
o DVD Writer - a device used for both reading and writing data to and from a DVD.
o DVD-RAM Drive - a device used for rapid writing and reading of data from a special type of DVD.
• Blu-ray Disc - a high-density optical disc format for data and high-definition video. Can store 70 times as much information as a CD.
o BD-ROM Drive - a device used for reading data from a Blu-ray disc.
o BD Writer - a device used for both reading and writing data to and from a Blu-ray disc.
• HD DVD - a discontinued competitor to the Blu-ray format.
• Floppy disk - an outdated storage device consisting of a thin disk of a flexible magnetic storage medium. Used today mainly for loading RAID drivers.
• Zip drive - an outdated medium-capacity removable disk storage system, first introduced by Iomega in 1994.
• USB flash drive - a flash memory data storage device integrated with a USB interface, typically small, lightweight, removable, and rewritable. Capacities vary, from hundreds of megabytes (in the same ballpark as CDs) to tens of gigabytes (surpassing, at great expense, Blu-ray discs).
• Tape drive - a device that reads and writes data on a magnetic tape, used for long term storage and backups.
Internal storage
Hardware that keeps data inside the computer for later use and remains persistent even when the computer has no power.
• Hard disk - for medium-term storage of data.
• Solid-state drive - a device similar to a hard disk but containing no moving parts, storing data in flash memory.
• RAID array controller - a device to manage several internal or external hard disks (and optionally some peripherals) in order to achieve performance or reliability improvements in what is called a RAID array.
Sound card
Main article: Sound card
Enables the computer to output sound to audio devices, as well as accept input from a microphone. Most modern computers have sound cards built-in to the motherboard, though it is common for a user to install a separate sound card as an upgrade. Most sound cards, either built-in or added, have surround sound capabilities.
Networking
Main article: Computer networks
Connects the computer to the Internet and/or other computers.
• Modem - for dial-up connections or sending digital faxes. (outdated)
• Network card - for DSL/Cable internet, and/or connecting to other computers, using IEEE 802.3 standards.
• Direct Cable Connection - use of a null modem cable to connect two computers via their serial ports, or of a Laplink cable to connect them via their parallel ports.
Other peripherals
Input
Main article: Input
• Text input devices
o Keyboard - a device to input text and characters by depressing buttons (referred to as keys), similar to a typewriter. The most common English-language key layout is the QWERTY layout.
• Pointing devices
o Mouse - a pointing device that detects two dimensional motion relative to its supporting surface.
o Optical Mouse - a newer technology that uses a laser or, more commonly, an LED to track the surface under the mouse and determine its motion, which is translated into pointer movement on the screen.
o Trackball - a pointing device consisting of an exposed protruding ball housed in a socket that detects rotation about two axes.
• Gaming devices
o Joystick - a general control device that consists of a handheld stick that pivots around one end, to detect angles in two or three dimensions.
o Gamepad - a general handheld game controller that relies on the digits (especially thumbs) to provide input.
o Game controller - a specific type of controller specialized for certain gaming purposes.
• Image, Video input devices
o Image scanner - a device that provides input by analyzing images, printed text, handwriting, or an object.
o Webcam - a low resolution video camera used to provide visual input that can be easily transferred over the internet.
• Audio input devices
o Microphone - an acoustic sensor that provides input by converting sound into electrical signals.
Output
Main article: Output
• Image, Video output devices
o Printer
o Monitor
• Audio output devices
o Speakers
o Headset
The motherboard is the "body"[citation needed] of the computer. Components directly attached to the motherboard include:
• The central processing unit (CPU) performs most of the calculations which enable a computer to function, and is sometimes referred to as the "brain" of the computer. It is usually cooled by a heat sink and fan.
• The chipset mediates communication between the CPU and the other components of the system, including main memory.
• RAM Stores all running processes (applications) and the current running OS. RAM Stands for Random Access Memory
• The BIOS includes boot firmware and power management. The Basic Input Output System tasks are handled by operating system drivers.
• Internal Buses connect the CPU to various internal components and to expansion cards for graphics and sound.
o Current
The northbridge memory controller, for RAM and PCI Express
PCI Express, for graphics cards
PCI, for other expansion cards
SATA, for disk drives
o Obsolete
ATA (superseded by SATA)
AGP (superseded by PCI Express)
VLB VESA Local Bus (superseded by AGP)
ISA (expansion card slot format obsolete in PCs, but still used in industrial computers)
• External Bus Controllers support ports for external peripherals. These ports may be controlled directly by the southbridge I/O controller or based on expansion cards attached to the motherboard through the PCI bus.
o USB
o FireWire
o eSATA
Power supply
Main article: Power supply unit (computer)
Includes power cord, switch, and cooling fan. Supplies power at appropriate voltages to the motherboard and internal disk drives. .
Video display controller
Main article: Graphics card
Produces the output for the visual display unit. This will either be built into the motherboard or attached in its own separate slot (PCI, PCI-E, PCI-E 2.0, or AGP), in the form of a graphics card.
Removable media devices
Main article: Computer storage
• CD (compact disc) - the most common type of removable media, suitable for music and data.
o CD-ROM Drive - a device used for reading data from a CD.
o CD Writer - a device used for both reading and writing data to and from a CD.
• DVD (digital versatile disc) - a popular type of removable media that is the same dimensions as a CD but stores up to 12 times as much information. It is the most common way of transferring digital video, and is popular for data storage.
o DVD-ROM Drive - a device used for reading data from a DVD.
o DVD Writer - a device used for both reading and writing data to and from a DVD.
o DVD-RAM Drive - a device used for rapid writing and reading of data from a special type of DVD.
• Blu-ray Disc - a high-density optical disc format for data and high-definition video. Can store 70 times as much information as a CD.
o BD-ROM Drive - a device used for reading data from a Blu-ray disc.
o BD Writer - a device used for both reading and writing data to and from a Blu-ray disc.
• HD DVD - a discontinued competitor to the Blu-ray format.
• Floppy disk - an outdated storage device consisting of a thin disk of a flexible magnetic storage medium. Used today mainly for loading RAID drivers.
• Zip drive - an outdated medium-capacity removable disk storage system, first introduced by Iomega in 1994.
• USB flash drive - a flash memory data storage device integrated with a USB interface, typically small, lightweight, removable, and rewritable. Capacities vary, from hundreds of megabytes (in the same ballpark as CDs) to tens of gigabytes (surpassing, at great expense, Blu-ray discs).
• Tape drive - a device that reads and writes data on a magnetic tape, used for long term storage and backups.
Internal storage
Hardware that keeps data inside the computer for later use and remains persistent even when the computer has no power.
• Hard disk - for medium-term storage of data.
• Solid-state drive - a device similar to hard disk, but containing no moving parts and stores data in a digital format.
• RAID array controller - a device to manage several internal or external hard disks and optionally some peripherals in orfer to achieve performance or reliability improvement in what is called a RAID array.
Sound card
Main article: Sound card
Enables the computer to output sound to audio devices, as well as accept input from a microphone. Most modern computers have sound cards built-in to the motherboard, though it is common for a user to install a separate sound card as an upgrade. Most sound cards, either built-in or added, have surround sound capabilities.
Networking
Main article: Computer networks
Connects the computer to the Internet and/or other computers.
• Modem - for dial-up connections or sending digital faxes. (outdated)
• Network card - for DSL/Cable internet, and/or connecting to other computers, using IEEE 802.3 standards.
• Direct Cable Connection - Use of a null modem, connecting two computers together using their serial ports or a Laplink Cable, connecting two computers together with their parallel ports.
Other peripherals
Input
Main article: Input
• Text input devices
o Keyboard - a device to input text and characters by depressing buttons (referred to as keys), similar to a typewriter. The most common English-language key layout is the QWERTY layout.
• Pointing devices
o Mouse - a pointing device that detects two dimensional motion relative to its supporting surface.
o Optical Mouse - a newer technology that uses lasers, or more commonly LEDs to track the surface under the mouse to determine motion of the mouse, to be translated into mouse movements on the screen.
o Trackball - a pointing device consisting of an exposed protruding ball housed in a socket that detects rotation about two axes.
• Gaming devices
o Joystick - a general control device that consists of a handheld stick that pivots around one end, to detect angles in two or three dimensions.
o Gamepad - a general handheld game controller that relies on the digits (especially thumbs) to provide input.
o Game controller - a specific type of controller specialized for certain gaming purposes.
• Image, Video input devices
o Image scanner - a device that provides input by analyzing images, printed text, handwriting, or an object.
o Webcam - a low resolution video camera used to provide visual input that can be easily transferred over the internet.
• Audio input devices
o Microphone - an acoustic sensor that provides input by converting sound into electrical signals.
o Mic - Converting an autio signal into electrical signal
Output
Main article: Output
• Image, Video output devices
o Printer
o Monitor
• Audio output devices
o Speakers
o Headset
Know What Are PC Tools And How Are They Useful In PC
The hardware and software, along with a number of peripheral devices, constitute a PC system. More specifically, a PC system can be divided into four major elements: the hardware, the application programs, the operating system, and the users. Hardware such as hard disk drives, optical drives (CD or DVD drives), random access memory (RAM), the motherboard, monitor, sound devices, keyboard, and system bus; application software such as word processors, spreadsheets, compilers, and web browsers; and an operating system such as Windows or Linux all work together to make a PC system function efficiently.
The hardware (the central processing unit (CPU), the memory, and the input/output devices) provides the basic computing resources for the computer system. The application programs (such as word processors, spreadsheets, compilers, and web browsers) define the way in which these resources are used to solve users' computing problems. The operating system (OS) is a program that manages the computer hardware as well as the other resources of the PC system. The operating system also provides a basis for application programs and acts as an intermediary between the computer user and the computer hardware. Although the operating system by itself performs no useful function, it provides an environment within which other programs can do useful work. Thus the operating system is an integral part of the PC system.
User programs are executed by the PC system. In a PC system, the hardware and the software work together in a synchronized manner, which gives the system stable performance and reliability. A PC system runs many components at the same time, so it is not unusual for it to crash. PC systems are prone to errors and faults, which can jeopardize valuable data and degrade performance. Numerous PC tools, many of them bundled with the operating system, are available to cope with these kinds of problems. These PC tools are programs designed to help users rectify such problems themselves.
PC tools can give a lot of valuable information about the state of the PC system, presented as graphs, histograms, or reports. With the help of this information, any problems that exist can be found and fixed. Over time, as a user adds or removes software, devices, and drives, the system accumulates extraneous registry entries, which can lead to slower performance. A user can run PC tools that clean up these registry entries and make the operating system work faster.
Disk Defragmenter is a PC tool that examines local volumes and consolidates fragmented files and folders. A PC runs better with regular disk defragmentation: defragmenting reorganizes the hard drive so that access to files and programs is more efficient. Other PC tools such as Event Viewer, Shared Folders, Local Users and Groups, Performance Logs and Alerts, and Device Manager also help in managing PC system performance.
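The toy model below illustrates the idea behind defragmentation (a deliberately simplified sketch, not how the Windows Disk Defragmenter is actually implemented): files whose blocks are scattered around the disk are rewritten into contiguous runs, so that sequential access needs fewer seeks. The file names and block numbers are made up for the example.

fragmented = {                 # file name -> disk blocks it currently occupies
    "report.doc": [2, 9, 4],
    "song.mp3":   [7, 1, 12],
    "notes.txt":  [5],
}

def defragment(layout):
    """Reassign each file to one contiguous run of blocks, packed from block 0."""
    compacted, next_free = {}, 0
    for name, blocks in layout.items():
        compacted[name] = list(range(next_free, next_free + len(blocks)))
        next_free += len(blocks)
    return compacted

print(defragment(fragmented))
# {'report.doc': [0, 1, 2], 'song.mp3': [3, 4, 5], 'notes.txt': [6]}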
Saturday, April 11, 2009
about computers
Microprocessors
On November 15, 1971, Intel released the world's first commercial microprocessor, the 4004. It was developed for a Japanese calculator company, Busicom, as an alternative to hardwired circuitry, but computers were developed around it, with much of their processing abilities provided by a single small microprocessor chip. Coupled with one of Intel's other products - the RAM chip, based on an invention by Robert Dennard of IBM, (kilobits of memory on a single chip) - the microprocessor allowed fourth generation computers to be smaller and faster than previous computers. The 4004 was only capable of 60,000 instructions per second, but its successors, the Intel 8008, 8080 (used in many computers using the CP/M operating system), and the 8086/8088 family (the IBM PC and compatibles use processors still backwards-compatible with the 8086) brought ever-increasing speed and power to the computers. Other manufacturers also produced microprocessors which were widely used in microcomputers.
Supercomputers
At the other end of the computing spectrum from the microcomputers, the powerful supercomputers of the era also used integrated circuit technology. In 1976 the Cray-1 was developed by Seymour Cray, who had left Control Data in 1972 to form his own company. This machine, the first supercomputer to make vector processing practical, had a characteristic horseshoe shape, to speed processing by shortening circuit paths. Vector processing, which uses a single instruction to perform the same operation on many arguments, has been a fundamental supercomputer processing method ever since. The Cray-1 could calculate 150 million floating point operations per second (150 megaflops). 85 were shipped at a price of $5 million each. The Cray-1 had a CPU that was mostly constructed of ECL SSI/MSI circuits.
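To show what "a single instruction performing the same operation on many arguments" looks like from the programmer's side, here is a minimal NumPy sketch; a modern CPU running it uses SIMD units rather than a Cray-style vector pipeline, but the programming model (one operation expressed over whole arrays) is the same.

import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.ones(1_000_000, dtype=np.float64)

def scalar_axpy(a, b, alpha=2.0):
    """Scalar style: one multiply-add per loop iteration."""
    out = np.empty_like(a)
    for i in range(len(a)):
        out[i] = alpha * a[i] + b[i]
    return out

def vector_axpy(a, b, alpha=2.0):
    """Vector style: the same operation applied to every element at once."""
    return alpha * a + b

assert np.allclose(scalar_axpy(a[:1000], b[:1000]), vector_axpy(a[:1000], b[:1000]))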
Mainframes and minicomputers
Time shared computer terminals connected to central computers, such as the TeleVideo ASCII character mode smart terminal pictured here, were sometimes used before the advent of the PC.
Before the introduction of the microprocessor in the early 1970s, computers were generally large, costly systems owned by large institutions: corporations, universities, government agencies, and the like. Users—who were experienced specialists—did not usually interact with the machine itself, but instead prepared tasks for the computer on off-line equipment, such as card punches. A number of assignments for the computer would be gathered up and processed in batch mode. After the jobs had completed, users could collect the output printouts and punched cards. In some organizations it could take hours or days between submitting a job to the computing center and receiving the output.
A more interactive form of computer use developed commercially by the middle 1960s. In a time-sharing system, multiple teletype terminals let many people share the use of one mainframe computer processor. This was common in business applications and in science and engineering.
A different model of computer use was foreshadowed by the way in which early, pre-commercial, experimental computers were used, where one user had exclusive use of a processor.[2] Some of the first computers that might be called "personal" were early minicomputers such as the LINC and PDP-8, and later on VAX and larger minicomputers from Digital Equipment Corporation (DEC), Data General, Prime Computer, and others. They originated as peripheral processors for mainframe computers, taking on some routine tasks and freeing the processor for computation. By today's standards they were physically large (about the size of a refrigerator) and costly (typically tens of thousands of US dollars), and thus were rarely purchased by individuals. However, they were much smaller, less expensive, and generally simpler to operate than the mainframe computers of the time, and thus affordable by individual laboratories and research projects. Minicomputers largely freed these organizations from the batch processing and bureaucracy of a commercial or university computing center.
In addition, minicomputers were more interactive than mainframes, and soon had their own operating systems. The minicomputer Xerox Alto (1973) was a landmark step in the development of personal computers, because of its graphical user interface, bit-mapped high resolution screen, large internal and external memory storage, mouse, and special software.[3]
Microprocessor and cost reduction
The Apple II, one of the "1977 Trinity". The drive shown is a model designed for the Apple III.
The minicomputer ancestors of the modern personal computer used integrated circuit (microchip) technology, which reduced size and cost, but processing was carried out by circuits with large numbers of components arranged on multiple large printed circuit boards before the introduction of the microprocessor. They were consequently physically large and expensive to manufacture. After the "computer-on-a-chip" was commercialized, the cost to manufacture a computer system dropped dramatically. The arithmetic, logic, and control functions that previously occupied several costly circuit boards were now available in one integrated circuit which was very expensive to design but very cheap to manufacture in large quantities. Concurrently, advances in the development of solid state memory eliminated the bulky, costly, and power-hungry magnetic core memory used in prior generations of computers.
There were a few researchers at places such as SRI and Xerox PARC who were working on computers that a single person could use and could be connected by fast, versatile networks: not home computers, but personal ones.
Altair 8800 and IMSAI 8080
Main articles: Altair 8800 and IMSAI 8080
Development of the single-chip microprocessor was an enormous catalyst to the popularization of cheap, easy to use, and truly personal computers. The Altair 8800, introduced in a Popular Electronics magazine article in the January 1975 issue, at the time set a new low price point for a computer, bringing computer ownership to an admittedly select market in the 1970s. This was followed by the IMSAI 8080 computer, with similar abilities and limitations. The Altair and IMSAI were essentially scaled-down minicomputers and were incomplete: connecting a keyboard or teletype to them required heavy, expensive "peripherals". These machines both featured a front panel with switches and lights, which communicated with the operator in binary. To program the machine after switching it on, the bootstrap loader program had to be entered, without error, in binary; then a paper tape containing a BASIC interpreter was loaded from a paper-tape reader. Keying in the loader required setting a bank of eight switches up or down and pressing the "load" button, once for each byte of the program, which was typically hundreds of bytes long. The computer could run BASIC programs once the interpreter had been loaded.
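The front-panel procedure described above can be pictured with a small simulation (a hedged sketch: the switch handling is simplified and the bytes below are placeholders, not the real Altair bootstrap loader).

memory = [0] * 256                      # toy main memory

def deposit(address, switches):
    """Simulate setting the eight data switches (a string of 0s and 1s) and pressing deposit."""
    assert len(switches) == 8 and set(switches) <= {"0", "1"}
    memory[address] = int(switches, 2)

bootstrap = ["00111110", "00010000", "11010011", "00000001"]   # placeholder bytes only
for addr, byte in enumerate(bootstrap):
    deposit(addr, byte)                 # one press per byte; the real loader ran to hundreds of bytes

print(memory[:4])                       # [62, 16, 211, 1]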
The MITS Altair, the first commercially successful microprocessor kit, was featured on the cover of Popular Electronics magazine in January 1975. It was the world's first mass-produced personal computer kit, as well as the first computer to use an Intel 8080 processor. It was a commercial success with 10,000 Altairs being shipped. The Altair also inspired the software development efforts of Paul Allen and his high school friend Bill Gates who developed a BASIC interpreter for the Altair, and then formed Microsoft.
The MITS Altair 8800 effectively created a new industry of microcomputers and computer kits, with many others following, such as a wave of small business computers in the late 1970s based on the Intel 8080, Zilog Z80 and Intel 8085 microprocessor chips. Most ran the CP/M-80 operating system developed by Gary Kildall at Digital Research. CP/M-80 was the first popular microcomputer operating system to be used by many different hardware vendors, and many software packages were written for it, such as WordStar and dBase II.
Many hobbyists during the mid 1970s designed their own systems, with various degrees of success, and sometimes banded together to ease the job. Out of these house meetings the Homebrew Computer Club developed, where hobbyists met to talk about what they had done, exchange schematics and software, and demonstrate their systems. Many people built or assembled their own computers as per published designs. For example, many thousands of people built the Galaksija home computer later in the early 80s.
It was arguably the Altair computer that spawned the development of Apple, as well as Microsoft which produced and sold the Altair BASIC programming language interpreter, Microsoft's first product. The second generation of microcomputers — those that appeared in the late 1970s, sparked by the unexpected demand for the kit computers at the electronic hobbyist clubs, were usually known as home computers. For business use these systems were less capable and in some ways less versatile than the large business computers of the day. They were designed for fun and educational purposes, not so much for practical use. And although you could use some simple office/productivity applications on them, they were generally used by computer enthusiasts for learning to program and for running computer games, for which the personal computers of the period were less suitable and much too expensive. For the more technical hobbyists home computers were also used for electronics interfacing, such as controlling model railroads, and other general hobbyist pursuits.
On November 15, 1971, Intel released the world's first commercial microprocessor, the 4004. It was developed for a Japanese calculator company, Busicom, as an alternative to hardwired circuitry, but computers were soon developed around it, with much of their processing provided by a single small microprocessor chip. Coupled with another Intel product, the DRAM chip (kilobits of memory on a single chip), based on an invention by Robert Dennard of IBM, the microprocessor allowed fourth-generation computers to be smaller and faster than their predecessors. The 4004 was capable of only about 60,000 instructions per second, but its successors, the Intel 8008, the 8080 (used in many computers running the CP/M operating system), and the 8086/8088 family (the IBM PC and compatibles use processors still backwards-compatible with the 8086), brought ever-increasing speed and power. Other manufacturers also produced microprocessors which were widely used in microcomputers.
Supercomputers
At the other end of the computing spectrum from the microcomputers, the powerful supercomputers of the era also used integrated circuit technology. In 1976 the Cray-1 was developed by Seymour Cray, who had left Control Data in 1972 to form his own company. The first supercomputer to make vector processing practical, it had a characteristic horseshoe shape designed to speed processing by shortening circuit paths. Vector processing, which uses a single instruction to perform the same operation on many arguments, has been a fundamental supercomputer processing method ever since. The Cray-1 could calculate 150 million floating-point operations per second (150 megaflops); eighty-five were shipped at a price of $5 million each, and its CPU was built mostly from ECL SSI/MSI circuits.
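As a rough illustration of the idea (not Cray code), the sketch below contrasts a scalar loop with a vector-style expression using NumPy, which applies one operation across a whole array at once; the array names and sizes are arbitrary.

# A minimal sketch of vector processing as an idea: one operation applied
# to many arguments at once. NumPy is used purely for illustration; the
# Cray-1 did this in hardware with vector registers and pipelines.
import numpy as np

a = np.arange(1_000_000, dtype=np.float64)
b = np.arange(1_000_000, dtype=np.float64)

# Scalar style: one multiply-add per loop iteration.
c_scalar = np.empty_like(a)
for i in range(len(a)):
    c_scalar[i] = 2.0 * a[i] + b[i]

# Vector style: a single expression operates on every element, letting the
# library dispatch the work to optimized (often SIMD) routines.
c_vector = 2.0 * a + b

assert np.allclose(c_scalar, c_vector)

On a typical machine the vector form runs far faster than the explicit Python loop, which mirrors the advantage vector hardware offered over purely scalar instruction streams.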
Mainframes and minicomputers
Time-shared computer terminals connected to central computers, such as TeleVideo ASCII character-mode smart terminals, were sometimes used before the advent of the PC.
Before the introduction of the microprocessor in the early 1970s, computers were generally large, costly systems owned by large institutions: corporations, universities, government agencies, and the like. Users—who were experienced specialists—did not usually interact with the machine itself, but instead prepared tasks for the computer on off-line equipment, such as card punches. A number of assignments for the computer would be gathered up and processed in batch mode. After the jobs had completed, users could collect the output printouts and punched cards. In some organizations it could take hours or days between submitting a job to the computing center and receiving the output.
A more interactive form of computer use developed commercially by the mid-1960s. In a time-sharing system, multiple teletype terminals let many people share the use of one mainframe computer processor. This was common in business applications and in science and engineering.
A different model of computer use was foreshadowed by the way in which early, pre-commercial, experimental computers were used, where one user had exclusive use of a processor.[2] Some of the first computers that might be called "personal" were early minicomputers such as the LINC and PDP-8, and later on VAX and larger minicomputers from Digital Equipment Corporation (DEC), Data General, Prime Computer, and others. They originated as peripheral processors for mainframe computers, taking on some routine tasks and freeing the processor for computation. By today's standards they were physically large (about the size of a refrigerator) and costly (typically tens of thousands of US dollars), and thus were rarely purchased by individuals. However, they were much smaller, less expensive, and generally simpler to operate than the mainframe computers of the time, and thus affordable by individual laboratories and research projects. Minicomputers largely freed these organizations from the batch processing and bureaucracy of a commercial or university computing center.
In addition, minicomputers were more interactive than mainframes, and soon had their own operating systems. The Xerox Alto (1973) was a landmark step in the development of personal computers because of its graphical user interface, bit-mapped high-resolution screen, large internal and external memory storage, mouse, and special software.[3]
Microprocessor and cost reduction
The Apple II, one of the "1977 Trinity" of home computers.
The minicomputer ancestors of the modern personal computer used integrated circuit (microchip) technology, which reduced size and cost, but before the introduction of the microprocessor, processing was still carried out by large numbers of components arranged across multiple large printed circuit boards. Such machines were consequently physically large and expensive to manufacture. After the "computer-on-a-chip" was commercialized, the cost of manufacturing a computer system dropped dramatically: the arithmetic, logic, and control functions that had previously occupied several costly circuit boards became available in a single integrated circuit that was very expensive to design but very cheap to manufacture in large quantities. Concurrently, advances in solid-state memory eliminated the bulky, costly, and power-hungry magnetic core memory used in prior generations of computers.
A few researchers at places such as SRI and Xerox PARC were working on computers that a single person could use and that could be connected by fast, versatile networks: not home computers, but personal ones.
Altair 8800 and IMSAI 8080
Main articles: Altair 8800 and IMSAI 8080
Development of the single-chip microprocessor was an enormous catalyst for the popularization of cheap, easy-to-use, and truly personal computers. The Altair 8800, introduced in a Popular Electronics magazine article in the January 1975 issue, set a new low price point for a computer at the time, bringing computer ownership to an admittedly select market in the 1970s. It was followed by the IMSAI 8080, a computer with similar abilities and limitations. The Altair and IMSAI were essentially scaled-down minicomputers and were incomplete: connecting a keyboard or teletype to them required heavy, expensive "peripherals". Both machines featured a front panel of switches and lights that communicated with the operator in binary. To program the machine after switching it on, a bootstrap loader had to be entered, without error, in binary; then a paper tape containing a BASIC interpreter could be loaded from a paper-tape reader. Keying in the loader required setting a bank of eight switches up or down and pressing the "load" button once for each byte of the program, which was typically hundreds of bytes long. Once the interpreter had been loaded, the computer could run BASIC programs.
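To make the switch-based entry concrete, here is a toy sketch (not actual MITS software) of how eight toggle positions map onto a single byte deposited into memory; the switch patterns shown are arbitrary examples, not a real bootstrap loader.

# A toy illustration of front-panel entry: each byte of the loader was set
# on eight toggle switches (1 = up, 0 = down) and deposited into the next
# memory location by pressing a button. The byte values below are
# arbitrary placeholders, not a real Altair loader.
def switches_to_byte(switches):
    """Convert eight switch positions, most significant bit first, to a byte."""
    assert len(switches) == 8
    value = 0
    for position in switches:
        value = (value << 1) | (position & 1)
    return value

memory = []
for bank in [
    [0, 0, 1, 1, 1, 1, 1, 0],   # 0b00111110 -> 0x3E
    [1, 0, 0, 0, 0, 0, 0, 0],   # 0b10000000 -> 0x80
]:
    memory.append(switches_to_byte(bank))   # one "deposit" per byte keyed in

print([hex(b) for b in memory])   # ['0x3e', '0x80']

Repeating that deposit step without a single mistake, for a loader hundreds of bytes long, is what made the process so tedious and error-prone.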
The MITS Altair, the first commercially successful microprocessor kit, was featured on the cover of Popular Electronics in January 1975. It was the world's first mass-produced personal computer kit, as well as the first computer to use an Intel 8080 processor, and it was a commercial success, with 10,000 Altairs shipped. The Altair also inspired the software development efforts of Paul Allen and his high-school friend Bill Gates, who developed a BASIC interpreter for the Altair and then formed Microsoft.
The MITS Altair 8800 effectively created a new industry of microcomputers and computer kits, with many others following, such as a wave of small business computers in the late 1970s based on the Intel 8080, Zilog Z80 and Intel 8085 microprocessor chips. Most ran the CP/M-80 operating system developed by Gary Kildall at Digital Research. CP/M-80 was the first popular microcomputer operating system to be used by many different hardware vendors, and many software packages were written for it, such as WordStar and dBase II.
Many hobbyists during the mid-1970s designed their own systems, with varying degrees of success, and sometimes banded together to ease the job. Out of these house meetings the Homebrew Computer Club developed, where hobbyists met to talk about what they had done, exchange schematics and software, and demonstrate their systems. Many people built or assembled their own computers from published designs; for example, many thousands of people built the Galaksija home computer in the early 1980s.
It was arguably the Altair that spawned the development of Apple, as well as of Microsoft, which produced and sold the Altair BASIC interpreter as its first product. The second generation of microcomputers, those that appeared in the late 1970s, sparked by the unexpected demand for kit computers at the electronics hobbyist clubs, were usually known as home computers. For business use these systems were less capable, and in some ways less versatile, than the large business computers of the day. They were designed for fun and educational purposes rather than practical use, and although some simple office/productivity applications could be run on them, they were generally used by computer enthusiasts for learning to program and for running computer games, for which the larger business computers of the period were less suitable and much too expensive. For more technical hobbyists, home computers were also used for electronics interfacing, such as controlling model railroads, and for other general hobbyist pursuits.