This book is intended as an elementary introduction to information theory for scientists and engineers who have no specialized knowledge of statistics. It does not require a detailed acquaintance with communication systems, and most of the mathematical background will be familiar to undergraduates in science and technology. Nevertheless, the techniques and concepts involved are sufficient to challenge the student. The reader who acquires a full understanding of the ideas will have a stepping stone to more advanced courses in cryptography, automatic error correction, linguistics, and related fields.
Every effort has been made to keep proofs simple, and a heuristic approach has sometimes been adopted where an over-emphasis on rigour might obscure the issues. The book begins with a short description of the relevant notions from the theory of probability. The quantity of information in a message is then defined and its properties developed, one significant conclusion being that data processing cannot increase the amount of information in data, though it may put that information in a more digestible form. From this foundation the question of coding messages, and of making the coding optimal, is considered.
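To indicate the flavour of these results in the standard notation (the book develops its own definitions from first principles, so its symbols may differ): the information content of a source emitting symbols with probabilities $p_1, \ldots, p_n$ is measured by the entropy

\[ H = -\sum_{i=1}^{n} p_i \log_2 p_i \quad \text{bits per symbol}, \]

and the claim about data processing is the data-processing inequality: if $Z$ is computed from $Y$ alone, so that $X \to Y \to Z$ forms a chain, then

\[ I(X;Z) \le I(X;Y), \]

where $I(\cdot\,;\cdot)$ denotes mutual information. No processing of $Y$ can create information about $X$ that $Y$ did not already carry.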
Once coded messages are available they can be transmitted, and the channel capacity needs to be defined. Methods of calculating the capacity are discussed, and the consequences of the definition for error-free transmission are examined. In practice errors nearly always occur, and so the possibility of reducing their effect by block codes and error-correcting codes is investigated.
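In the standard formulation (again, the conventional notation rather than necessarily the book's), the capacity of a channel is the largest mutual information achievable over input distributions,

\[ C = \max_{p(x)} I(X;Y), \]

and for the binary symmetric channel that flips each transmitted bit with probability $p$ this works out to

\[ C = 1 + p \log_2 p + (1-p) \log_2 (1-p) \quad \text{bits per use}. \]

Shannon's channel coding theorem then asserts that any rate below $C$ can be achieved with arbitrarily small error probability, which is what motivates the study of error-correcting codes.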
Finally, the theory is generalized so as to be applicable to continuous signals, such as those encountered in radio communication.
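The best-known result of this continuous theory, stated here in its usual form as an illustration rather than as the book's own development, is the Shannon–Hartley formula for a channel of bandwidth $W$ hertz carrying a signal of power $S$ in additive white Gaussian noise of power $N$:

\[ C = W \log_2\!\left(1 + \frac{S}{N}\right) \quad \text{bits per second}. \]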
Exercises are provided so that the reader may check whether the theory has been satisfactorily grasped.