In this chapter we look at how the hardware of a computer is structured and how it works. In particular, Sections 1 to 3 examine the "meeting point" between hardware and software; that is, how it is possible to make hardware "reason" through software. Section 4 describes the Von Neumann machine, setting out the general structure of all modern computers and clarifying the meaning of the concepts "automatic", "algorithm" and "machine" introduced previously. Sections 5-8 describe the hardware of modern computers and the computer's "five senses", that is, the equipment that enables a computer to interact with the external world and, in particular, with the user (anyone using the computer).

1. The binary language and its representation in the hardware.

As human beings, we reason and think in a language made of words over a certain alphabet (this language might be Italian if we are Italian, English if we are English, Chinese if we are Chinese, and so on). A computer "reasons" in the same way, using various languages which are all made of words over the binary alphabet, consisting of only two letters: 0 and 1. In this sense, computers "think and/or reason" in binary.

With regard to computers, a language is called a code and a letter of a code word is called a bit. More precisely, a bit is a letter, or binary variable (which can therefore only take 0 or 1 as its value), and it is the basic unit of measure for amounts of data.

Other fundamental units of measure for amounts of data are:

1 Byte     = 8 bits         = 2^3 bits
1 Kilobyte = 1024 Bytes     = 2^10 Bytes     = 2^13 bits
1 Megabyte = 1024 Kilobytes = 2^10 Kilobytes = 2^23 bits
1 Gigabyte = 1024 Megabytes = 2^10 Megabytes = 2^33 bits
1 Terabyte = 1024 Gigabytes = 2^10 Gigabytes = 2^43 bits
1 Petabyte = 1024 Terabytes = 2^10 Terabytes = 2^53 bits
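The table above can be transcribed directly in Python as a quick sanity check (a sketch; the constant names are chosen here for illustration, measuring everything in bits):

```python
# Powers-of-two data units, measured in bits, as in the table above.
BIT = 1
BYTE = 8 * BIT                # 2^3 bits
KILOBYTE = 1024 * BYTE        # 2^10 bytes = 2^13 bits
MEGABYTE = 1024 * KILOBYTE    # 2^20 bytes = 2^23 bits
GIGABYTE = 1024 * MEGABYTE    # 2^30 bytes = 2^33 bits
TERABYTE = 1024 * GIGABYTE    # 2^40 bytes = 2^43 bits
PETABYTE = 1024 * TERABYTE    # 2^50 bytes = 2^53 bits

assert KILOBYTE == 2**13
assert MEGABYTE == 2**23
assert PETABYTE == 2**53
```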

Human beings usually represent natural numbers in the decimal system, as sums of powers of 10 with coefficients 0, 1, ..., 9, called decimal digits. For instance, the decimal representation of the number 89 is 89 = 8⋅10^1 + 9⋅10^0. The right-most digit is the number of decimal units of the number, the digit just to its left is the number of decimal tens, and so on for the hundreds, thousands, etc.

Vice versa, a finite sequence of decimal digits (that is, symbols from 0 to 9) can be read as the decimal representation of a number: the one whose number of units equals the right-most digit of the sequence, whose number of tens equals the digit just to the left of the units, and so on for the hundreds, thousands, etc.

Since the alphabet used by computers is the binary one, each natural number is represented internally in the binary system; in a nutshell, as a sum of powers of 2 with coefficients 0 and 1. For example, the binary representation of the number 89 is:

89 = 1⋅2^6 + (89 - 64)
   = 1⋅2^6 + 25
   = 1⋅2^6 + 0⋅2^5 + 25
   = 1⋅2^6 + 0⋅2^5 + 1⋅2^4 + (25 - 16)
   = 1⋅2^6 + 0⋅2^5 + 1⋅2^4 + 9
   = 1⋅2^6 + 0⋅2^5 + 1⋅2^4 + 1⋅2^3 + (9 - 8)
   = 1⋅2^6 + 0⋅2^5 + 1⋅2^4 + 1⋅2^3 + 1
   = 1⋅2^6 + 0⋅2^5 + 1⋅2^4 + 1⋅2^3 + 0⋅2^2 + 0⋅2^1 + 1⋅2^0
   = 1011001.
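The step-by-step expansion above, subtracting the largest possible power of 2 at each step, can be sketched in Python (the function name is chosen here for illustration):

```python
def to_binary(n):
    """Binary representation of n, built by repeatedly subtracting
    powers of 2, mirroring the expansion of 89 shown above."""
    if n == 0:
        return "0"
    power = 1
    while power * 2 <= n:
        power *= 2            # largest power of 2 not exceeding n
    bits = ""
    while power >= 1:
        if n >= power:
            bits += "1"
            n -= power        # subtract the power, keep the remainder
        else:
            bits += "0"
        power //= 2
    return bits

assert to_binary(89) == "1011001"
```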

Note that the binary number system works exactly like the decimal number system; the only difference is the base: in the binary system, numbers are expressed as sums of powers of 2 instead of powers of 10. As in the decimal case, a finite sequence of binary digits (that is, the symbols 0 and 1) can be read as the (binary) representation of a number.

For instance, the binary sequence 1001011 represents the decimal number (i.e., the number in its decimal representation)

1001011 = 1⋅2^6 + 0⋅2^5 + 0⋅2^4 + 1⋅2^3 + 0⋅2^2 + 1⋅2^1 + 1⋅2^0
        = 1⋅64 + 0⋅32 + 0⋅16 + 1⋅8 + 0⋅4 + 1⋅2 + 1⋅1
        = 64 + 8 + 2 + 1
        = 75.
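The inverse direction, from a binary digit string back to a number, is the same sum of powers of 2 and can be sketched as:

```python
def from_binary(bits):
    """Value of a binary digit string, read as a sum of powers of 2."""
    value = 0
    for i, digit in enumerate(reversed(bits)):
        value += int(digit) * 2**i   # the right-most digit multiplies 2^0
    return value

assert from_binary("1001011") == 75
assert from_binary("1001011") == int("1001011", 2)  # agrees with Python's built-in
```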

In this way, any number is independent of its representation and, given a base b ∈ IN − {0} (that is, b a natural number greater than 0), it can be uniquely represented by a sequence of digits from 0 to b−1 (called the b-ary representation of the number). For this reason, we may assume that a computer thinks in terms of natural numbers, because thinking of any number is equivalent to thinking of the binary sequence that represents it (the case b = 2).
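The b-ary representation and its inverse can be sketched in Python for any base b ≥ 2 (the function names are chosen here for illustration; the degenerate base b = 1, whose only digit is 0, is left out):

```python
def to_base(n, b):
    """Digits of n in base b (most significant first), for b >= 2."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % b)   # next digit is the remainder modulo b
        n //= b
    return digits[::-1]

def from_base(digits, b):
    """Inverse: the number represented by a digit sequence in base b."""
    value = 0
    for d in digits:
        value = value * b + d
    return value

assert to_base(89, 2) == [1, 0, 1, 1, 0, 0, 1]   # the binary case b = 2
assert to_base(89, 10) == [8, 9]                 # the familiar decimal case
assert from_base(to_base(75, 7), 7) == 75        # representation is invertible
```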

Each computer hardware component is essentially made up of electrical circuits, that is, of transistors (made of semiconductor material) linked to one another by metal wires (aluminum or copper) called electrical wires. The electrical wires store the binary letters composing a code word, or convey the letters of a code word from one transistor to another.

Transistors make it possible to manipulate (change or leave unchanged) the binary letters stored and/or conveyed by the wires. Binary letters are represented in wires by two particular physical states: by convention, a wire conveys and/or stores the letter 0 if its electrical potential is GND = 0 volts, and conveys and/or stores the letter 1 if its electrical potential is VDD (nowadays, the potential commonly used to represent the letter 1 is about VDD = 3 volts; five years ago it was VDD = 5 volts and in five years it will be VDD = 2.5 volts or less. The reduction is driven by speed and energy-consumption concerns).

Note that this is precisely the contact point between hardware and software: the hardware is the set of electrical wires, whereas the software is the physical state (specifically, the potential value) of those wires. Electrical circuits in which, by convention, only a finite number of physical states is meaningful are called digital circuits; those in which any physical state (provided it does not make the machine blow up) is meaningful are called analog circuits.

Computer hardware is mostly composed of binary digital circuits (that is, circuits which can assume exactly two distinct states).

2. The transistors.

In this section, we take a transistor to be a device connected to three wires: one called the input and the other two called the outputs. Depending on the type of transistor, if the input wire carries 0 (respectively, 1), the two output wires are disconnected from each other, while if the input wire carries 1 (respectively, 0), the two output wires are connected to each other. It therefore seems fair to say that a transistor is an electronically controlled electrical switch (unlike the usual lightbulb switch, which is controlled manually by pushing a button).

Thus, there are two types of transistors: the first disconnects the two output wires if the input wire carries 0; the second, on the contrary, disconnects the output wires if the input wire carries 1.

The first type is called NPN (or N-type), whereas the second is called PNP (or P-type). The two types behave in a complementary way and are represented graphically as in Figure 1.


Figure 1: stick diagrams representing the two types of transistors.

3. The Boolean gates

In this section, we look at how transistors can be linked to one another in order to manipulate the binary letters 0 and 1 and, for example, implement circuits, called Boolean gates, which compute the fundamental logical connectives: NOT, AND and OR.

The NOT connective represents negation; it is a function of one binary variable which maps 0 to NOT(0) = 1 and 1 to NOT(1) = 0.

This connective is represented by the following truth table:

x NOT(x)
0 1
1 0

The AND connective represents conjunction; it is a function of two binary variables defined by the following truth table:

x y AND(x,y)
0 0 0
0 1 0
1 0 0
1 1 1

The OR connective represents disjunction; it is a function of two binary variables defined by the following truth table:

x y OR(x,y)
0 0 0
0 1 1
1 0 1
1 1 1
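The three truth tables above can be transcribed and checked in a few lines of Python:

```python
# The three fundamental connectives, written directly from their truth tables.
def NOT(x):
    return 1 if x == 0 else 0

def AND(x, y):
    return 1 if (x, y) == (1, 1) else 0

def OR(x, y):
    return 0 if (x, y) == (0, 0) else 1

# Reproduce the three truth tables, rows in the order shown above.
assert [NOT(0), NOT(1)] == [1, 0]
assert [AND(x, y) for x in (0, 1) for y in (0, 1)] == [0, 0, 0, 1]
assert [OR(x, y) for x in (0, 1) for y in (0, 1)] == [0, 1, 1, 1]
```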

The connectives NOT, AND and OR are called fundamental because, by composing these functions, it is possible to express every function of binary variables with binary values. A way to realize a Boolean gate which computes the logical connective NOT (called an inverter) is shown in Figure 2. It is built in the so-called CMOS technology (Complementary Metal Oxide Semiconductor). As we can see in Figure 2, an inverter is composed of a PNP transistor (above) and an NPN transistor (below). The inputs of the two transistors are connected to one another by a wire which conveys the input value x. The upper output of the PNP transistor is held at VDD volts (corresponding to the logical value 1), while the lower output of the NPN transistor is held at GND (corresponding to the logical value 0). The lower output of the PNP and the upper output of the NPN are connected to one another by a wire which conveys the output value NOT(x).

Figure 2: NOT gate or inverter.

Let's look at how the circuit in Figure 2 works! If the input x is 0, the PNP transistor has its output wires connected and the NPN transistor has its output wires disconnected. The output of the inverter is therefore connected, through the PNP transistor, to the VDD wire. This implies that the electrical potential of the inverter output is VDD, which represents the logical value 1. So, if the input wire of the inverter conveys 0, the output wire must convey 1. Vice versa, if the input x is 1, the PNP transistor has its output wires disconnected and the NPN transistor has its output wires connected; the output of the inverter is therefore connected, through the NPN transistor, to the GND wire. This implies that the electrical potential of the inverter output is GND, which represents the logical value 0. So, if the input wire of the inverter conveys 1, the output wire must convey 0. Figure 3 shows various representations of the NOT gate.
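The switch behavior just described can be simulated with a small Python sketch, modeling each transistor as an ideal switch (an idealization made here for illustration; real transistors are analog devices):

```python
VDD, GND = 1, 0   # logical shorthand for the two potentials

def npn_connects(gate):
    """N-type: output wires connected when the input carries 1."""
    return gate == 1

def pnp_connects(gate):
    """P-type: output wires connected when the input carries 0."""
    return gate == 0

def inverter(x):
    """CMOS inverter: the output is pulled to VDD through the PNP
    or to GND through the NPN -- never through both at once."""
    pull_up = pnp_connects(x)     # conducting path from output to VDD?
    pull_down = npn_connects(x)   # conducting path from output to GND?
    assert pull_up != pull_down   # exactly one path conducts
    return VDD if pull_up else GND

assert inverter(0) == 1
assert inverter(1) == 0
```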

Figure 3: various representations of the NOT gate.

Similar circuits can be designed for the AND and OR logical connectives and for memory cells (that is, circuits which store binary letters, or data bits).
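As an illustration of why NOT, AND and OR are fundamental, here is a Python sketch composing them into XOR, a connective outside the fundamental set (the composition used is one standard choice, not the only one):

```python
def NOT(x): return 1 - x
def AND(x, y): return x * y
def OR(x, y): return 1 - (1 - x) * (1 - y)

def XOR(x, y):
    # XOR(x, y) = OR(AND(x, NOT(y)), AND(NOT(x), y)):
    # true exactly when the two inputs differ.
    return OR(AND(x, NOT(y)), AND(NOT(x), y))

assert [XOR(x, y) for x in (0, 1) for y in (0, 1)] == [0, 1, 1, 0]
```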

4. The Von Neumann's architecture

All modern computers are structured according to the Von Neumann machine model and are realized with a technology called VLSI (Very Large Scale Integration), which makes it possible to miniaturize electrical circuits. In VLSI, the electrical circuits making up the logic of the computer (or of a system in general) are split up into various tiny rectangular pieces of silicon, called chips, which are connected to each other through electrical wires and placed on a printed circuit board (or card). Nowadays, a chip may contain up to 1,200 million = 1.2 billion transistors (for example, Intel's Dual-Core Itanium 2 chip) and its area may reach about 1.5 cm^2.

All this shrinking is possible because the smallest shape that can be drawn on a chip today (the minimum patternable feature) measures λ = 45 nm (1 nm = 1 nanometer = one billionth of a meter = one thousandth of a micron; note that a virus measures about 18-20 nm). It is worth noting that the parameter λ keeps being scaled down as VLSI technology develops, and that processors produced with a λ = 10 nm process will soon be on the market.

Let us now describe the Von Neumann machine, thereby clarifying the concept of "automatic" without ambiguity. We give the following definition.

Definition of the Von Neumann machine (practical model of computation): the Von Neumann machine is a triple

(IN,IS,P)

where,

IN = {0, 1, 2, ...} is the set of natural numbers and represents the machine alphabet. Note that this component is a constant, that is, it is the same for every machine.

IS = {ZERO, INC, SUM, SUB, MUL, DIV, EQUAL, MINOR, CJUMP, HALT} (the Instruction Set) is the set of generic instructions of the machine. Note that this component is also constant and, so, equal for every machine. The first eight instructions implement functions with values in IN and are defined as follows.

ZERO: IN → IN
  n ↦ ZERO(n) = 0,

INC: IN → IN
  n ↦ INC(n) = n+1,

SUM: IN × IN → IN
  (n,m) ↦ SUM(n,m) = n+m,

SUB: IN × IN → IN
  (n,m) ↦ SUB(n,m) = { n-m   if n > m,
                       0     if n ≤ m,

MUL: IN × IN → IN
  (n,m) ↦ MUL(n,m) = n×m,

DIV: IN × IN → IN
  (n,m) ↦ DIV(n,m) = { ⌊n/m⌋   if m > 0,
                       0       if m = 0;

where ⌊x⌋ indicates the integer part of the real number x; that is, the greatest integer less than or equal to x. Also,

EQUAL: IN × IN → IN
  (n,m) ↦ EQUAL(n,m) = { 1   if n = m,
                         0   if n ≠ m,

MINOR: IN × IN → IN
  (n,m) ↦ MINOR(n,m) = { 1   if n < m,
                         0   if n ≥ m.
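As a quick sanity check, the eight functions above can be transcribed directly in Python (a sketch, not part of the machine definition):

```python
# The eight arithmetic/comparison instructions of IS, on natural numbers.
def ZERO(n): return 0
def INC(n): return n + 1
def SUM(n, m): return n + m
def SUB(n, m): return n - m if n > m else 0   # truncated: never below 0
def MUL(n, m): return n * m
def DIV(n, m): return n // m if m > 0 else 0  # integer part; DIV by 0 gives 0
def EQUAL(n, m): return 1 if n == m else 0
def MINOR(n, m): return 1 if n < m else 0

assert SUB(3, 5) == 0    # subtraction is truncated at zero
assert DIV(7, 2) == 3    # the integer part of 7/2
assert MINOR(2, 2) == 0  # strict "less than"
```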

The ninth instruction, CJUMP, is a special instruction called the conditional jump. It is a function that associates with a natural number and an index of P (called an address) the action of "jumping to the instruction of P" specified by the index, on condition that the natural number is different from zero. Formally, letting J = {j ∈ IN : 0 ≤ j ≤ |P|−1} be a finite set of indices,

CJUMP: IN × J → Action
  (n,j) ↦ CJUMP(n,j) = { jump to the j-th instruction of P   if n > 0,
                         do nothing                           if n = 0.

The tenth instruction, HALT, is a constant equal to the action of stopping, which ends the machine computation.

P = {I0, I1, I2, ..., I|P|−1} is a non-empty finite sequence of specific instructions, called the machine program. By specific instruction we mean any instruction (that is, element of IS) in which particular values have been given to the independent variables. The set of these values, over all the specific instructions of the program, is called the set of program data.

It is assumed that the machine computation takes place by carrying out the program instructions, in the order defined by the program itself (starting from the first instruction, then the second, etc.), unless a conditional jump occurs.

All this happens in the following way.


Figure 4: Hardware of the Von Neumann machine.

It is imagined that the hardware is composed of a memory and a processor, as in Figure 4.

The memory is a system composed of a sufficiently large sequence of memory locations, each containing a natural number. Nowadays, every computer has a 64-bit architecture, so each memory location consists of 64 bistable circuits (each like a light bulb that is either on or off) and may therefore contain 64 bits = 8 bytes of data. This means that, in reality, each memory location may only contain numbers ranging from 0 to 2^64 − 1. Each memory location is associated with its index in the sequence, called the location address. The program and its data are stored in the memory itself. The program (or, rather, an encoding of the program as natural numbers) is stored in the memory locations whose addresses range from 0 to |P| − 1, so that the i-th program instruction is stored in the memory location whose address is i. The data necessary for the program execution, the program data, are recorded in another part of the memory.

The processor is a system composed of four fundamental parts: the Program Counter (PC), the Instruction Register (IR), the Arithmetic and Logic Unit (ALU) and the Control Unit. The Program Counter is a memory location that contains the address of the current instruction to be executed. The Instruction Register is a memory location that contains the current instruction to be executed. The Arithmetic and Logic Unit is a system that executes the instruction indicated by the PC and stored in the IR. The Control Unit is a system that, through a sequence of state changes, generates control signals which appropriately "excite and/or inhibit" certain parts of the processor (including the ALU) in such a way that each instruction is executed through the following actions/algorithm.

"Execution of an Instruction" Algorithm:

A1: Read the PC content.

A2: Fetch into the processor (specifically, into the IR) the instruction contained in the memory location whose address is contained in the PC.

A3: Decode the instruction, that is,

A3.1: "understand" which instruction it is among the ten possible ones; that is, send signals to the ALU (and possibly to other parts of the processor) so that it executes the characteristic function of the given instruction, and

A3.2: acquire from memory the data specified by the instruction, in order to carry it out. In a specific program instruction, data are specified indirectly, through the addresses of the memory locations containing them. For example, a program instruction may be SUM(M2,M0), where M2 and M0 are the addresses of the two memory locations containing the two numbers (data) to be added. An exception is the CJUMP instruction, whose second variable (operand), j, is given directly. For example, a specific CJUMP instruction may be CJUMP(M1,12302).

A4: Wait for the ALU to compute the result.

A5: Store the result in the memory location specified by the left-most operand of the instruction. If, for example, the instruction being executed is SUM(M2,M0), the control stores the result in the memory location whose address is M2. If the instruction being executed is CJUMP(M1,j) and the content, n, of the memory location with address M1 is different from zero, the control stores the number j (which is the address of the next instruction to be executed) in the PC. Note: machines whose operands and results are taken directly from/stored directly to memory (as in steps A3.2/A5) are called memory-memory machines.

A6: Increment the PC by 1, unless the instruction being executed is a HALT, or is a CJUMP(M1,j) whose memory location with address M1 has non-zero content.

The above "Execution of an Instruction" algorithm is repeated over and over again until the HALT instruction is encountered, whereupon the machine stops. Note that the algorithm is universal, in the sense that it does not depend on the program. In this way, the hardware of the Von Neumann machine is universal (that is, it is a general-purpose machine): what the machine computes depends only on the program which, being stored in memory, is software.
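The fetch-decode-execute cycle of steps A1-A6 can be sketched in a few lines of Python. The tuple encoding of instructions used here is an assumption made for illustration (a real machine stores instructions as natural numbers); following step A5, the left-most operand is both a source and the destination:

```python
def run(program, memory):
    """Sketch of the A1-A6 cycle for a memory-memory machine.
    Instructions are tuples (an illustrative encoding):
      ("SUM", M2, M0)  -> memory[M2] = memory[M2] + memory[M0]
      ("CJUMP", M1, j) -> jump to instruction j if memory[M1] != 0
      ("HALT",)        -> stop"""
    alu = {  # characteristic functions of the two-operand instructions
        "SUM":   lambda a, b: a + b,
        "SUB":   lambda a, b: a - b if a > b else 0,
        "MUL":   lambda a, b: a * b,
        "DIV":   lambda a, b: a // b if b > 0 else 0,
        "EQUAL": lambda a, b: 1 if a == b else 0,
        "MINOR": lambda a, b: 1 if a < b else 0,
    }
    pc = 0                                    # Program Counter
    while True:
        ir = program[pc]                      # A1-A2: fetch into the IR
        op = ir[0]                            # A3.1: decode
        if op == "HALT":
            break                             # stop the computation
        if op == "CJUMP":
            if memory[ir[1]] != 0:            # A5: conditional jump
                pc = ir[2]
                continue
        elif op == "ZERO":
            memory[ir[1]] = 0
        elif op == "INC":
            memory[ir[1]] = memory[ir[1]] + 1
        else:                                 # A3.2-A5: operands from memory,
            dst, src = ir[1], ir[2]           # result back to the left operand
            memory[dst] = alu[op](memory[dst], memory[src])
        pc += 1                               # A6: next instruction
    return memory

# Count from M0 down to 0, incrementing M1 at each step (M2 holds the constant 1).
memory = {0: 3, 1: 0, 2: 1}
program = [
    ("CJUMP", 0, 2),   # 0: if M0 != 0, jump to instruction 2
    ("HALT",),         # 1: done
    ("SUB", 0, 2),     # 2: M0 = M0 - 1
    ("INC", 1),        # 3: M1 = M1 + 1
    ("CJUMP", 2, 0),   # 4: M2 holds 1, so always jump back to 0
]
run(program, memory)
assert memory[1] == 3 and memory[0] == 0
```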

In modern computers, one chip contains the CPU (and possibly part of the memory as well), whereas another chip (or several chips) contains the memory. Communication between the CPU and the memory (represented by the arrows in Figure 4) takes place through a communication channel, called a bus, consisting of several parallel wires.

Let us make an example of a von Neumann machine program that computes whether a number is divisible by 4 or not. Note that a number n is divisible by 4 if, and only if, the remainder, r, of the division between n and 4 is 0. Note that this remainder is simply given by r = n - 4⋅n/4. Let us assume that the number n is stored in the memory location M0. The following program writes the result in the memory location M1. In particular, it writes 0 if the number is not divisible by 4 and writes 1 if the number is divisible by 4.