            k=ij[k][ip[(c+2) % 10][7 & m++]];
    }
    for (j=0;j<=9;j++)          Find which appended digit will check properly.
        if (!ij[k][ip[j][m & 7]]) break;
    *ch=j+48;                   Convert to ASCII.
    return k == 0;
}

CITED REFERENCES AND FURTHER READING:
McNamara, J.E. 1982, Technical Aspects of Data Communication, 2nd ed. (Bedford, MA: Digital Press). [1]
da Cruz, F. 1987, Kermit, A File Transfer Protocol (Bedford, MA: Digital Press). [2]
Morse, G. 1986, Byte, vol. 11, pp. 115-124 (September). [3]
LeVan, J. 1987, Byte, vol. 12, pp. 339-341 (November). [4]
Sarwate, D.V. 1988, Communications of the ACM, vol. 31, pp. 1008-1013. [5]
Griffiths, G., and Stones, G.C. 1987, Communications of the ACM, vol. 30, pp. 617-620. [6]
Wagner, N.R., and Putter, P.S. 1989, Communications of the ACM, vol. 32, pp. 106-110. [7]

Huffman Coding and Compression of Data

A lossless data compression algorithm takes a string of symbols (typically ASCII characters or bytes) and translates it reversibly into another string, one that is, on the average, of shorter length. The words "on the average" are crucial; it is obvious that no reversible algorithm can make all strings shorter, since there just aren't enough short strings to be in one-to-one correspondence with longer strings. Compression algorithms are possible only when, on the input side, some strings, or some input symbols, are more common than others. These can then be encoded in fewer bits than rarer input strings or symbols, giving a net average gain.

There exist many, quite different, compression techniques, corresponding to different ways of detecting and using departures from equiprobability in input strings. In this section and the next we shall consider only variable length codes with defined word inputs. In these, the input is sliced into fixed units, for example ASCII characters, while the corresponding output comes in chunks of variable size. The simplest such method is Huffman coding [1], discussed in this section. Another example, arithmetic compression, is discussed in the next section.
At the opposite extreme from defined-word, variable length