Comma code
From Wikipedia, the free encyclopedia
A comma code is a type of prefix-free code in which a comma, a particular symbol or sequence of symbols, occurs at the end of a code word and never occurs otherwise.[1] This is an intuitive way to express arrays.
For example, Fibonacci coding is a comma code in which the comma is 11. 11 and 1011 are valid Fibonacci code words, but 101, 0111, and 11011 are not.
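This membership rule can be sketched in Python (a minimal check of our own devising; the function name is not standard):

```python
def is_fibonacci_codeword(word: str) -> bool:
    # A Fibonacci code word ends in the comma "11" and contains no other
    # occurrence of "11" -- not even one overlapping the final comma,
    # which is why we search word[:-1] rather than word[:-2].
    # This accepts "11" and "1011" but rejects "101", "0111" and "11011".
    return word.endswith("11") and "11" not in word[:-1]
```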
Examples
- Unary coding, in which the comma is 0. This allows NULL values: when the code and comma form a single 0, the value can be taken as a NULL or a 0.
- Fibonacci coding, in which the comma is 11. A comma of 11 implies that the two symbols used to carry data are 0 and 10, which translate to the bits 0 and 1 when representing arbitrary bit strings or numbers. To represent an arbitrary bit string or number this way, one simply writes a 0 for a 0, a 10 for a 1, and a 11 for the comma/separator (repeating the separator/comma for a NULL). This constructs a Fibonacci lookalike code: it looks like a Fibonacci code but translates directly to the bit string rather than to the number represented in the Fibonacci numeral system. With standard Fibonacci coding, every integer is represented as a Fibonacci code word, and the integer→code→integer encoding and decoding require Fibonacci analysis. With the Fibonacci lookalike code, one takes the bit string or a bitwise number, writes it as a series of 0s and 10s, and ends the string/number with a 11. This allows one to express arrays.
| Symbol | Reverse Binary representation | Fibonacci code word | Fibonacci lookalike code | Punctured Elias Code |
|---|---|---|---|---|
| 1 | 1 | 11 | 11 | 1 1 |
| 2 | 01 | 011 | 0 11 | 1 01 |
| 3 | 11 | 0011 | 10 11 | 01 11 |
| 4 | 001 | 1011 | 0 0 11 | 1 001 |
| 5 | 101 | 00011 | 10 0 11 | 01 101 |
| 6 | 011 | 10011 | 0 10 11 | 01 011 |
| 7 | 111 | 01011 | 10 10 11 | 001 111 |
| 8 | 0001 | 000011 | 0 0 0 11 | 1 0001 |
| 9 | 1001 | 100011 | 10 0 0 11 | 01 1001 |
| 10 | 0101 | 010011 | 0 10 0 11 | 01 0101 |
| 11 | 1101 | 001011 | 10 10 0 11 | 001 1101 |
| 12 | 0011 | 101011 | 0 0 10 11 | 01 0011 |
| 13 | 1011 | 0000011 | 10 0 10 11 | 001 1011 |
| 14 | 0111 | 1000011 | 0 10 10 11 | 001 0111 |
The Fibonacci code can be deconstructed into a data part and a pre-comma part (not the 11 itself, but the count of 1s in the data). This yields a punctured Elias code, which simply writes the count of 1s among the digits to follow. Conversely, one can construct the Fibonacci code from the punctured Elias code by writing a 10 for every 1 in the data and a 11 for the last 1 in the data string. If the data is a random bit string, one can simply write 0 for each 0 in the bit string, 10 for each 1, and 11 as the comma/separator. This also allows a NULL value, which is simply 11.
This method allows a bit string or number of length n to be expressed in 1.5n + 2 bits, assuming 0s and 1s are present in equal measure in the data.
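A minimal sketch of this lookalike encoding in Python (function names are our own):

```python
def lookalike_encode(bits: str) -> str:
    """Encode a bit string: 0 -> '0', 1 -> '10', then append the comma '11'."""
    return "".join("0" if b == "0" else "10" for b in bits) + "11"

def lookalike_decode(code: str) -> str:
    """Invert lookalike_encode; the bare comma '11' decodes to the empty (NULL) string."""
    assert code.endswith("11")
    body, bits, i = code[:-2], [], 0
    while i < len(body):
        if body[i] == "0":          # '0' stands for a 0
            bits.append("0"); i += 1
        else:                       # '10' stands for a 1
            bits.append("1"); i += 2
    return "".join(bits)
```

With equal numbers of 0s and 1s, a string of length n costs n/2 + 2·(n/2) + 2 = 1.5n + 2 bits, matching the figure above: `lookalike_encode("0110")` is `"01010011"`, 8 bits for n = 4.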
- All Huffman codes can be converted to comma codes by prepending a 1 to every code word and using a single 0 as the code for the comma.
| Symbol | Code | Comma Code |
|---|---|---|
| Comma | - (N/A) | 0 |
| 0 | 00 | 100 |
| 1 | 01 | 101 |
| 2 | 10 | 110 |
| 3 | 11 | 111 |
A word here is defined as a sequence of symbols ending in a comma, the comma acting as the equivalent of a space character.
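The conversion is mechanical, as a short Python sketch shows (using the four-symbol code from the table; the dictionary layout is ours):

```python
def to_comma_code(codebook: dict) -> dict:
    """Prepend '1' to every Huffman code word; a lone '0' becomes the comma."""
    out = {sym: "1" + word for sym, word in codebook.items()}
    out["comma"] = "0"
    return out

# The fixed-length code from the table above:
huffman = {"0": "00", "1": "01", "2": "10", "3": "11"}
```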
- 50% commas in all data axiom – All implied data, specifically variable-length bijective data, can be shown to consist of exactly 50% commas.
All scrambled data, or suitably curated same-length data, exhibits so-called implied probability.
Such data, which can be termed 'generic data', can be analysed using any interleaving unary code as a header, where additional bijective bits (equal in number to the length of the unary code just read) are read as data while the unary code serves as an introduction or header for the data. This header serves as a comma. The data can be read in an interleaved fashion, between each bit of the header, or in a post-read fashion, where the data is read only after the entire unary header code has been read, as in Chen–Ho encoding.
It can be seen by random-walk techniques and by statistical summation that all generic data has a header or comma averaging 2 bits and data of an additional 2 bits (minimum 1).
This also allows for an inexpensive base-increase algorithm before transmission over non-binary communication channels, such as base-3 or base-5 channels.
| n | RL code | Next code | Bijective data (non-NULL) | Commas |
|---|---|---|---|---|
| 1 | 1? | 0? | ? (1=1, 2=2) | , |
| 2 | 1?1? | 0?0? | ?? (3,4,5,6 = 11,12,21,22) | ,, |
| 3 | 1?1?1? | 0?0?0? | ??? | ,,, |
| 4 | 1?1?1?1? | 0?0?0?0? | ???? | ,,,, |
| 5 | 1?1?1?1?1? | 0?0?0?0?0? | ????? | ,,,,, |
| 6 | 1?1?1?1?1?1? | 0?0?0?0?0?0? | ?????? | ,,,,,, |
| 7 | 1?1?1?1?1?1?1? | 0?0?0?0?0?0?0? | ??????? | ,,,,,,, |
| 8 | 1?1?1?1?1?1?1?1? | 0?0?0?0?0?0?0?0? | ???????? | ,,,,,,,, |
| 9 | 1?1?1?1?1?1?1?1?1? | 0?0?0?0?0?0?0?0?0? | ????????? | ,,,,,,,,, |
| 10 | 1?1?1?1?1?1?1?1?1?1? | 0?0?0?0?0?0?0?0?0?0? | ?????????? | ,,,,,,,,,, |
| ... | ... | ... | ... | ... |
Here '?' is '1' or '2', the value of the bijective digit, which requires no further processing.
Since a single comma separates each field of data, all the data consists of exactly 50% commas. This is also visible from the implied probability of 50% for the 0 code in the Huffman base-3 codes 0, 10, 11 (net 2/3, or 66.66%, commas), and from the base-5 comma code shown above. To remain cost-effective, higher-base communication must keep the cost-per-character quotient near logarithmic for the data and below 2 bits for the comma character.
This method guarantees a '1' or '2' after every '0' (comma), a property that can be useful when designing around timing concerns in transmission. Converting a known binary value to ternary can be somewhat expensive unless ternary bit costs are reduced to near binary bit costs, so this bit can be multiplexed onto a separate binary channel if costs agree. This may require reading an additional 'tail' of 2 bits of pure data on the binary channel (from after the first bit of the first change, since this is not an instantly decodable code; it can simply be read if an instantly decodable unary code is used), matching the average of 2 ternary bits remaining on the primary channel before cost comparisons are factored in.
Not considering multiplexing, this method has a read efficiency of 3 ternary digits for a read of 4 binary bits, or 1.33 binary bits per ternary digit, i.e. an efficiency of 4/(3 log2 3) ≈ 84.1%.
This method allows a bitstring or number of length n to be expressed in 2n bits assuming 0s and 1s are present in equal measure in the data.
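One concrete reading of the RL-code/Next-code rows above (our interpretation, with our own function names) is that each bijective data digit is preceded by a header digit, and the header digit alternates between 1 and 0 from one field to the next, so that the polarity change acts as the comma:

```python
def rl_encode(fields):
    """Encode fields of bijective digits '1'/'2' as a ternary digit string.

    Each data digit is preceded by a header digit; the header alternates
    (1, 0, 1, ...) between fields, so a polarity change marks a new field.
    A field of n digits costs 2n digits: exactly half are 'comma' digits.
    """
    out, header = [], "1"
    for field in fields:
        for d in field:
            out.append(header + d)
        header = "0" if header == "1" else "1"
    return "".join(out)

def rl_decode(trits):
    """Split a header/data digit stream back into its fields."""
    fields, cur, prev = [], [], None
    for i in range(0, len(trits), 2):
        h, d = trits[i], trits[i + 1]
        if prev is not None and h != prev:   # polarity change = comma
            fields.append("".join(cur)); cur = []
        cur.append(d); prev = h
    if cur:
        fields.append("".join(cur))
    return fields
```

For example, `rl_encode(["12", "2", "11"])` spends 2 digits per data digit (10 in total for 5 digits), and `rl_decode` recovers the original fields.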
- 66.66% (2/3) commas in all data axiom – All implied data, specifically variable-length data, can be shown to consist of exactly 66.66% (2/3) commas.
| n | RL code | Next code | Bijective data (has NULL) | Commas |
|---|---|---|---|---|
| 1 | 1 | 0 | NULL (or 0) | , |
| 2 | 1?1 | 0?0 | ? (1=1, 2=2) | ,, |
| 3 | 1?1?1 | 0?0?0 | ?? (3,4,5,6 = 11,12,21,22) | ,,, |
| 4 | 1?1?1?1 | 0?0?0?0 | ??? | ,,,, |
| 5 | 1?1?1?1?1 | 0?0?0?0?0 | ???? | ,,,,, |
| 6 | 1?1?1?1?1?1 | 0?0?0?0?0?0 | ????? | ,,,,,, |
| 7 | 1?1?1?1?1?1?1 | 0?0?0?0?0?0?0 | ?????? | ,,,,,,, |
| 8 | 1?1?1?1?1?1?1?1 | 0?0?0?0?0?0?0?0 | ??????? | ,,,,,,,, |
| 9 | 1?1?1?1?1?1?1?1?1 | 0?0?0?0?0?0?0?0?0 | ???????? | ,,,,,,,,, |
| 10 | 1?1?1?1?1?1?1?1?1?1 | 0?0?0?0?0?0?0?0?0?0 | ????????? | ,,,,,,,,,, |
| ... | ... | ... | ... | ... |
Here '?' is '1' or '2', the value of the bijective digit, which requires no further processing. This method is statistically similar to a simple 'implied read' of the Huffman base-3 codes 0, 10, 11 (net 2/3, or 66.66%, commas).
It can be seen by random walk techniques and by statistical summation that all generic data has a header or comma of an average of 2 bits and data of an additional 1 bit (minimum 0).
This method, unlike the previous one, gives no assurance of a '1' or '2' after every '0' (comma), a property that can be useful when designing around timing concerns in transmission.
This method has a read efficiency of 2 ternary digits for a read of 3 binary bits, or 1.5 binary bits/ternary digit, i.e. an efficiency of 3/(2 log2 3) ≈ 94.6%.
This method allows a bit string or number of length n to be expressed in 2n + 1 bits, assuming 0s and 1s are present in equal measure in the data. The value 0 can be taken as 0 bits, i.e. the empty string "" followed by the terminating 1.
- 34.375% | 31.25% (~1/3) write commas for efficiency gains using number partitioning – Implied reads and writes using number-partitioning techniques ('m' numbers divided into 'n' partitions result in n^m permutations), similar to Chen–Ho and Hertz encoding, show greater efficiency of both reads and writes, comparable to a nearly random distribution. Thus the use of codes makes less sense and the use of higher bases becomes more important. Here a 'write' comma becomes any number in the base, while a 'read' comma is the header shown below, the Huffman base-4 codes 0, 10, 110, 111.
The main advantage of this technique, apart from higher efficiency, is that no base conversion is required, which would otherwise mean reading the entire stream first and then converting it. The disadvantage is that the average number length becomes higher, similar to random number generation, and the timing concerns that govern ternary transmission come to the fore. With m = 2 and n = 2, and remembering that a value of '(2)' is essentially 0 bits, we get:
| READS – Code space (128 states) | b3 | b2 | b1 | b0 | Values encoded (ternary digits) | Description | WRITES – Occurrences (100 states) |
|---|---|---|---|---|---|---|---|
| 50% (64 states) | 0 | a | b | | (0–1) (0–1) | Two lower digits | 44.44% (45 states) |
| 25% (32 states) | 1 | 0 | a | | (2) (0–1) | One lower digit, one higher digit | 22.22% (22 states) |
| 12.5% (16 states) | 1 | 1 | 0 | b | (0–1) (2) | One lower digit, one higher digit | 22.22% (22 states) |
| 12.5% (16 states) | 1 | 1 | 1 | | (2) (2) | Two higher digits | 11.11% (11 states) |

Here b3–b0 are the bits of the binary encoding.
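The m = 2, n = 2 partition code in the table above can be sketched as a small pair coder (function names are ours; digits take values 0–2):

```python
def encode_pair(d1: int, d2: int) -> str:
    """Encode two ternary digits into the binary prefix code of the table."""
    if d1 < 2 and d2 < 2:
        return "0" + str(d1) + str(d2)   # '0ab': two lower digits
    if d1 == 2 and d2 < 2:
        return "10" + str(d2)            # '10a': higher digit, lower digit
    if d1 < 2 and d2 == 2:
        return "110" + str(d1)           # '110b': lower digit, higher digit
    return "111"                         # '111': two higher digits

def decode_pair(bits: str):
    """Return (d1, d2, bits_consumed) for a code at the front of `bits`."""
    if bits[0] == "0":
        return int(bits[1]), int(bits[2]), 3
    if bits[1] == "0":
        return 2, int(bits[2]), 3
    if bits[2] == "0":
        return int(bits[3]), 2, 4
    return 2, 2, 3
```

Summing the code lengths over all nine digit pairs gives 29 bits, i.e. 29/18 ≈ 1.61 binary bits per ternary digit on a uniform write, matching the write figure below.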
This method therefore has a read efficiency of 2 ternary digits for a read of 3.125 binary bits on average, or 1.5625 binary bits/ternary digit, i.e. an efficiency of 3.125/(2 log2 3) ≈ 98.6%.
The write efficiency is 2 ternary digits for a write of 3.22 binary bits on average (29/9 bits), or 1.61 binary bits/ternary digit, i.e. an efficiency of (2 log2 3)/3.22 ≈ 98.4%.
- Cardinal numbers for efficient base conversion – Since it has been ascertained that comma codes are very similar to base conversion, the only concerns being efficiency and timing, the direct conversion/mapping of 19-bit binary numbers to 12-trit ternary numbers allows for an efficiency of 19/12 ≈ 1.583 binary bits per ternary digit, or 19/(12 log2 3) ≈ 99.9%, depending upon the method of calculation. This works because 2^19 = 524288 and 3^12 = 531441, so 2^19 ≃ 3^12. This is of course more of a theoretical construct and says nothing about timing when applied to ternary transmission methods. It does, however, leave 3^12 − 2^19 = 7153 spare codes to design around timing concerns.
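The 19-bit-to-12-trit mapping is ordinary base conversion, as a brief sketch shows (function names are ours):

```python
# 2**19 = 524288 <= 3**12 = 531441, so every 19-bit number fits in 12 trits,
# leaving 3**12 - 2**19 = 7153 spare codes.
def bits19_to_trits12(n: int) -> list:
    """Convert a 19-bit integer to 12 ternary digits, most significant first."""
    assert 0 <= n < 2**19
    trits = []
    for _ in range(12):
        trits.append(n % 3)
        n //= 3
    return trits[::-1]

def trits12_to_int(trits: list) -> int:
    """Invert bits19_to_trits12."""
    n = 0
    for t in trits:
        n = n * 3 + t
    return n
```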