Hello, everyone!
I'm going to start a new series of posts about computation structures. I hope the idea works out, and by the end of the series we will know how computers store and work with information, which types of memory a computer has, how processes know where to find the data they need and how they get permission to access it, and so on.

Information

Information is that which informs. I know that sounds circular, so let's look at it from another side: information is something that answers a question for us.
From an engineering point of view, information is data, communicated or received, that resolves uncertainty about a particular fact or circumstance.
As a next step, I wanted to review "Information theory" by Claude Shannon, which is used everywhere. Thanks to this theory we can calculate the losses in data compression (or how to avoid them), and we can calculate the capacity a channel needs for data to be transmitted reliably over it. But to understand the full scope of our topic clearly, we need to start from the general definitions and what they mean.

A bit of math

If we want to understand how a computer works, we need to know how it "thinks". Below are simple examples of the algebraic functions that will be used later. I know that everyone knows them ;) but let's keep them here.
Factorial - the factorial of n is the product of all positive integers less than or equal to n.
The main formula is: n! = n × (n−1) × (n−2) × … × 2 × 1
Example: 5! = 5 × 4 × 3 × 2 × 1 = 120
NOTICE: this function is defined only for non-negative integers, because extending it to negative numbers would require a division by zero.
Logarithm - the inverse function of exponentiation. The logarithm of a number is the exponent to which a fixed base must be raised to produce that number.
For example, raise 2 to the power of 4 (the exponent is written as a superscript to the right of the base): 2^4 = 16.
Therefore the logarithm is: log2(16) = 4.
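If you want to poke at these two functions yourself, here is a tiny sketch in Python (the standard math module and the specific numbers are just my own illustration, not part of the original examples):

```python
import math

# Factorial: the product of all positive integers up to n
print(math.factorial(5))   # 120 = 5 * 4 * 3 * 2 * 1

# Exponentiation and its inverse, the logarithm
print(2 ** 4)              # 16
print(math.log2(16))       # 4.0 -> the exponent we raised 2 to
```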

Units of measurement of digital information

Bit - the smallest unit of storage.
Everything in the computer is 0's and 1's, so the smallest unit of measurement can have only two values; this is called a bit.
Anything with two distinct states can store 1 bit. For example, on a hard drive a spot of magnetism can point North or South (0/1), and in chips an electric charge can be present or absent (0/1).
Byte - a unit that consists of eight bits.

Therefore one byte has 2^8 = 256 patterns (a number between 0 and 255). The space that data takes up in a computer is measured in bytes. One byte is big enough to hold a single typed character (ASCII); for example, the red/green/blue values of a pixel are each stored in one byte. If we are talking about a "Unicode" character, it is typically stored in two or more bytes, depending on the encoding.
Based on this, we can understand definitions like kilobyte (1 KB = a thousand bytes), megabyte (1 MB = a million bytes), gigabyte (1 GB = a billion bytes), and so on.
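To see these numbers in practice, here is a short Python sketch (the choice of ASCII and UTF-16 encodings is my assumption for the "one byte" and "two bytes" claims above):

```python
# One byte = 8 bits = 2**8 = 256 distinct patterns (values 0..255)
print(2 ** 8)                          # 256

# An ASCII character fits in a single byte
print(len("A".encode("ascii")))        # 1

# A typical Unicode character takes two bytes in UTF-16
print(len("Ж".encode("utf-16-be")))    # 2
```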

Quantifying Information

The main point of turning data into information is that the more you already know, the less information needs to be transported. This is exactly what "Information theory" is about. I will try to explain it with an example of playing cards.
We have 54 playing cards (a deck with two jokers), and one card is drawn at random.
Case 1: You learn that the card is the Ace of Spades.
Case 2: You learn that the card is a spade (one of the 13 spades).
Case 3: You learn that the card is not the Ace of Spades.
So, which of these cases is more informative for us? Let's see. The formula for the information received when learning the choice is:
I = log2(N / M)
where
N is the number of equally likely possibilities before we learn anything,
M is the number of possibilities that remain afterwards.
For example, how much information is in one coin flip? The chances are 50/50, so N is 2 and M is 1.
This gives I = log2(2 / 1) = 1 bit.
As we figured out before, one bit can hold two patterns (0 or 1) and a coin has two outcomes (tails or heads), so it makes sense that storing this information takes exactly one bit. Logical, isn't it?
So, back to our cases, let's review them one by one:
Case 1: I = log2(54 / 1) ≈ 5.75 bits, or 6 full bits.
Case 2: I = log2(54 / 13) ≈ 2.05 bits.
Case 3: I = log2(54 / 53) ≈ 0.03 bits, or 1 full bit.
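Here is a small Python sketch that reproduces these three numbers with the log2(N/M) formula (the helper function and its name are mine, just for illustration):

```python
from math import ceil, log2

def information(n, m):
    """Bits received when n equally likely choices are narrowed down to m."""
    return log2(n / m)

# One card is drawn from 54 playing cards
cases = [("Case 1: it is the Ace of Spades", 1),
         ("Case 2: it is a spade", 13),
         ("Case 3: it is not the Ace of Spades", 53)]

for label, m in cases:
    bits = information(54, m)
    print(f"{label}: {bits:.2f} bits ({ceil(bits)} full bit(s) to store)")
```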

Entropy

One of the important parts of the information field is entropy.
Entropy is the average amount of information contained in each piece of data produced by a probabilistic (stochastic) source.
So, what does that mean? It's hard to grasp at first, so I will give a few more examples to make this topic clearer.
A measure of information is associated with each possible data value: when the source produces a lower-probability value, that event carries more information. The amount of information carried by each event is then a random variable, and its expected value is the information entropy.
Entropy is calculated by the following formula:
H(X) = Σ p(x_i) · log2(1 / p(x_i))
or
H(X) = − Σ p(x_i) · log2 p(x_i)
These are the same formula, and both give the same result.
First, let's review the coin flip event (a reminder: the chances are 50/50). So our X will be X = {1/2, 1/2}.
H(X) = 1/2 · log2(2) + 1/2 · log2(2) = 1/2 + 1/2 = 1 bit
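A quick way to double-check this result is to compute it in Python (a minimal sketch; the entropy helper below is my own, implementing the formula above):

```python
from math import log2

def entropy(probabilities):
    """H(X) = sum of p * log2(1/p) over outcomes with non-zero probability."""
    return sum(p * log2(1 / p) for p in probabilities if p > 0)

# A fair coin: X = {1/2, 1/2}
print(entropy([0.5, 0.5]))   # 1.0 bit
```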
This is not the data itself; it is an amount of information. In a way, the amount of information in a message is a measure of how unexpected, how unique, it is.
Let's review a more illustrative case. For example, we have 4 departments with the following names:
A: 00111000101010100100011111
B: 11001101010011100111001
C: 1110000110010010
D: 1100010100100101010100100100101
If we count the bits, we see that department A's name (message) is 26 bits long, B's is 23 bits, and so on.
As a next step, let's assign people to each department. In total there are 100 employees: 25 work in A, 10 work in B, 50 work in C and 15 work in D.
We meet a pretty girl, but we don't know which department she works in, so the probability that she works in department A is 0.25. Learning "department A" would therefore give us 2 bits of information (by information theory); the 26-bit name itself is just data.
But after we meet her, we find out that she works in department B. The length of the message "department B" is 23 bits in this particular naming scheme, but the amount of information in it is only about 3.32 bits. The calculation is below.
A: log2(1 / 0.25) = 2 bits
B: log2(1 / 0.10) ≈ 3.32 bits
And now let's calculate the entropy of this event.
H(X) = 0.25 · log2(1/0.25) + 0.10 · log2(1/0.10) + 0.50 · log2(1/0.50) + 0.15 · log2(1/0.15) ≈ 0.50 + 0.33 + 0.50 + 0.41 ≈ 1.74 bits
So, from the example, we see that the average amount of information per answer in this case is about 1.74 bits.
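The same calculation in Python, for both the per-department information and the entropy of the whole distribution (again just a sketch with my own variable names):

```python
from math import log2

# Probabilities that a random employee works in each department
probs = {"A": 0.25, "B": 0.10, "C": 0.50, "D": 0.15}

# Information carried by learning one particular answer
for name, p in probs.items():
    print(f"department {name}: {log2(1 / p):.2f} bits")
# A: 2.00, B: 3.32, C: 1.00, D: 2.74

# Entropy: the average information per answer
h = sum(p * log2(1 / p) for p in probs.values())
print(f"entropy: {h:.2f} bits")   # ~1.74 bits
```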

This was the introduction post of the big topic "Computation Structures", and that's it for now. In the next post, I want to talk about encoding, error detection, and the role of Huffman's algorithm in digital circuits.
If you want me to change something or focus on something else, please let me know.

Thank you all for your attention,
- Kostia
