
Thread: Data correction methods resistant to pessimistic cases

  1. #1 Data correction methods resistant to pessimistic cases 
    Forum Senior
    Join Date
    Jul 2008
    Posts
    316
    Standard data correction methods take blocks of data and enlarge them to protect against some specific set of errors. We lose the whole block if the errors fall outside this set.
    For example Hamming (7,4) uses 3 additional checksum bits per 4 bits of information - it works fine as long as there is at most 1 error per 7-bit block.
    We use quite a lot of redundancy, but is it safe now? The problem is the pessimistic cases - even if the expected error rate is 1/100 bits, it still happens quite often that some 7-bit block contains 2 errors.

    To be able to correct errors we usually add a constant density of redundancy.
    But the density of errors usually isn't constant - it fluctuates - sometimes it is above average, sometimes below.
    When it is above, we need a lot of redundancy to cope with these pessimistic cases.
    When it is below, we have used more redundancy there than was really needed - we waste some capacity.

    The idea is to somehow transfer these unnecessary surpluses of redundancy to help with the difficult cases.
    Thanks to this, instead of adding redundancy according to the pessimistic error density, we could add it for a density just above the average, which is usually a few orders of magnitude smaller.
    To do this we cannot place the data in independent blocks (like the 7-bit blocks in Hamming) - their redundancy would be independent too.
    We should use a coding which allows us to hide the redundancy so that each piece of it affects a large part of the message.

    I'll now show an example of how it can be done.
    Unfortunately it has variable latency - simple errors can be corrected quickly, but more complicated ones may take a long time... maybe there are better ways?

    The idea is to use a very precise coding - such that any error makes the rest of the decoded sequence look like a completely random bit sequence.
    For example a block cipher which uses the previous block to calculate the following one... but there is a much better coding for this purpose, which I will describe later.

    Now add some easily recognizable redundancy to the information - for example insert a '1' between each digit (each such bit is hidden in all succeeding bits of the encoded message).
    If while decoding we find a '0' in one of these places, it means some error occurred earlier, most probably nearby.
    Knowing the statistical characteristics of the expected errors, we can make a list of the most probable errors for such cases and sort it by probability - at the top of this list there should be 'previous bit flipped', ... after a while 'two bits flipped: ...' can appear. This list can be very large.
    Now if we know that an error appeared (nearby), we go through this list position by position, apply the correction as if it really were this case (flip some bits) and try to decode a further fixed number of bits (a few dozen).
    If everything is OK - we get only '1' in the selected positions - we can assume that it was this error.
    If not, we try the next one from the list.
    This list can be generated online - given a large amount of time we could repair even a badly damaged transmission.
    While creating the list, we have to remember that errors can also appear in the succeeding bits.
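    Here is a minimal sketch (Python) of the trial-and-error loop described above - the decode_ok callback, the window size and the ordering of candidate errors are illustrative assumptions, not a fixed part of the method:
    Code:
    from itertools import combinations

    def candidate_errors(window, max_flips=2):
        # Yield bit-flip patterns (positions counted backwards from the detection
        # point), roughly from most to least probable for low error rates:
        # single flips first, then pairs, ...
        for flips in range(1, max_flips + 1):
            for positions in combinations(range(window), flips):
                yield positions

    def try_repair(bits, error_pos, decode_ok, window=16, lookahead=48):
        # When the '1'-marker test fails near error_pos, try candidate corrections
        # until decoding the next `lookahead` bits yields only valid markers.
        # decode_ok(bits, start, length) is an assumed callback returning True
        # when all markers on that stretch come out as '1'.
        for positions in candidate_errors(window):
            trial = list(bits)
            for p in positions:
                i = error_pos - p
                if i >= 0:
                    trial[i] ^= 1                       # flip the suspected bit(s)
            if decode_ok(trial, error_pos, lookahead):  # markers all '1' again?
                return trial                            # accept this correction
        return None                                     # extend the list / give up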

    Using block ciphers is a bit nasty - they are slow and we get large blocks to search for errors in...
    There is a new coding just ideal for the above purpose - Asymmetric Numeral Systems (ANS) - a new entropy coder which has very nice properties for cryptography... and data correction - it's much faster than block ciphers and uses small blocks of various lengths.
    Here for example is a demonstration of it:
    http://demonstrations.wolfram.com/Da...umeralSystems/

    What do you think about it?


    Reply With Quote  
     


  3. #2 Uniform redundancy distribution 
    Forum Senior
    Join Date
    Jul 2008
    Posts
    316
    We can use an ANS entropy coding property to make the above process quicker and distribute the redundancy really uniformly:
    to create an easily recognizable pattern, instead of inserting the '1' symbol regularly, we can add a new symbol - a forbidden one.
    If it occurs, we know that something went wrong - the nearer, the more probable.

    Let's say we use symbols with some probability distribution (p_i), so on average we need H = -sum_i p_i lg p_i bits/symbol.
    For example, if we just want to encode bytes without compression, we can treat them as 256 symbols with p_i = 1/256 (H = 8 bits/symbol).

    Our new symbol will have some chosen probability q. The nearer to 1 it is, the larger the redundancy density we add and the easier it is to correct errors.
    We have to rescale the remaining probabilities: p_i -> (1-q) p_i.
    In this way the size of the file increases by a factor of r = (H - lg(1-q))/H.

    Now if while decoding we get the forbidden symbol, we know that:
    - with probability q, the first not-yet-corrected error occurred in one of the bits used to decode the last symbol,
    - with probability (1-q)q it occurred in the bits used while decoding the previous symbol,
    - with probability (1-q)^2 q in the one before that, and so on.

    The probability of the succeeding cases drops exponentially, especially if (1-q) is near 0.
    But the number of required tries also grows exponentially.
    Observe, however, that for example the number of possible placements of 5 errors among 50 bits is only about 2 million - it can be checked in a moment.
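    A quick numeric sanity check of the growth factor and the counts above (Python; the q = 7/8 value anticipates the 4+3 example below):
    Code:
    from math import comb, log2

    H = 8                         # bits/symbol for uncompressed bytes (256 symbols)
    q = 7/8                       # chosen probability of the forbidden symbol
    r = (H - log2(1 - q)) / H     # file-size growth factor
    print(r)                      # 1.375 for these values

    # where the first uncorrected error sits, counting symbols backwards
    print([q * (1 - q)**k for k in range(4)])   # 0.875, 0.109, 0.0137, ...

    print(comb(50, 5))            # 2118760 placements of 5 errors among 50 bits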

    Let's compare it to two well known data correction methods: Hamming 4+3 (to store 4 bits we use 3 additional bits) and tripling each bit (1+2).
    Take the simplest error distribution model - each bit is flipped independently with constant probability, say e = 1/100.
    Then the probability that a 7-bit block contains at least 2 errors is
    1 - (1-e)^7 - 7e(1-e)^6 =~ 0.2%
    For a 3-bit block it's about 0.03%.
    So for each kilobyte of data we irreversibly lose about 4*4 = 16 bits with Hamming 4+3 and 2.4 bits with bit tripling.
    We see that even with methods which look well protected, we lose a lot of data because of the pessimistic cases.
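    These per-kilobyte figures can be reproduced in a few lines (Python; assuming 8000 information bits per kilobyte, as above):
    Code:
    e = 0.01                                   # per-bit flip probability

    # Hamming (7,4): a block is lost when it has 2 or more errors; 4 data bits gone
    p_block = 1 - (1 - e)**7 - 7*e*(1 - e)**6
    print(p_block, 8000/4 * p_block * 4)       # ~0.2% and ~16 lost bits per kB

    # Tripling (1+2): a bit is lost when 2 or 3 of its copies are flipped
    p_bit = 3*e**2*(1 - e) + e**3
    print(p_bit, 8000 * p_bit)                 # ~0.03% and ~2.4 lost bits per kB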

    For ANS based data correction:
    4+3 case (r = 7/4) - we add the forbidden symbol with probability q = 1 - 1/2^3 = 7/8, and each of the 2^4 = 16 symbols has probability 1/16 * 1/8 = 1/128.
    In practice ANS works best if the lg(p_i) aren't natural numbers, so q should (though not necessarily) be not exactly 7/8 but something around it.
    Now if the forbidden symbol occurs, with probability about 7/8 we only have to try flipping one of the (about) 7 bits used to decode this symbol.
    With 8 times smaller probability we have to flip one of the 7 bits from the previous symbol... with much smaller probability, depending on the error density model, we should try flipping some two bits... and even extremely pessimistic cases look like they take a reasonable time to correct.
    For the 1+2 case (r = 3), the forbidden symbol has probability about 3/4, and '0', '1' have 1/8 each.
    With probability 3/4 we only have to correct one of 3 bits... with probability 255/256 one of 12 bits...

    -----

    There is one problem - in practice the coding/decoding tables should fit into cache, so we can use at most about a million states.
    While trying thousands of correction combinations, we could accidentally reach a valid state with a wrong correction - a few bits would be corrected the wrong way and we wouldn't even notice.
    To prevent this we can, for example, use two similar stages of ANS - the first creates bytes and the second converts the output of the first into the final sequence.
    The second stage then gets uniformly distributed bytes, but ANS itself creates some small perturbations, so it will work fine.
    Thanks to this, the number of states grows to the square of the initial one, reducing this probability by a few orders of magnitude at the cost of doubled time requirements.
    We could additionally use a checksum to confirm the correction definitively.


    Reply With Quote  
     

  4. #3  
    Forum Senior
    Join Date
    Jul 2008
    Posts
    316
    I've just realized that we can use the huge freedom of choice in the ANS coding functions to improve latency - we can ensure that when the forbidden symbol occurs and there was only a single error, that error is among the bits used to decode this symbol.
    We may still have to go back to previous symbols, but only if there were at least 2 errors among these bits - that's an order of magnitude less probable than before.
    The other advantage is that if we try to verify a wrong correction by decoding further, a single error in a block will automatically tell us that the correction is wrong. There could be 2 errors instead, but that is much less probable - we can check for it much later.

    The trick is that the forbidden symbol usually dominates the coding tables, so we can arrange that whenever a given sequence of transferred bits yields an allowed symbol, every sequence differing in one bit (Hamming distance 1) yields the forbidden symbol.

    So in the initialization we choose some numbers of appearances of the allowed symbols and we have to place them somehow.
    For example: take an unplaced symbol, place it in a random unused position (using the list of unused positions), and place the forbidden symbol on each state differing in one of 'some' number of the last bits.
    This 'some' is a bit tricky - it has to work assuming that previously only allowed symbols were decoded, but it could have been any of them.
    If we are not compressing, all of them are equally probable and this 'some' is -lg(p_i) plus or minus 1: plus for high states, minus for low ones.

    Some states should remain unused after this procedure. We can fill them with forbidden symbols or continue the procedure, inserting more allowed symbols.
    This random initialization still leaves huge freedom of choice - we can use it to additionally encrypt the data, using a random generator initialized with the key.
    If we want data correction only, we can exploit the fact that in this procedure many forbidden symbols get marked a few times - the more of them, the smaller the output file... with slightly smaller but comparable safety.
    So we could consciously choose some good schemes, maybe even ones that use Hamming distance 2 (or greater) - then to have to go back to the previous symbol there would have to be 3 errors.

    For example the 4+3 scheme seems perfect: we transfer on average 7 bits, and for every allowed symbol there are 7 forbidden ones.
    For some high states like 111******** (the stars are the transferred bits) we have to place 8 forbidden symbols, but for low ones like 10000****** we can place only six.
    Some of the forbidden states will be marked a few times, so we should run the whole procedure and possibly use a slightly smaller (or larger) number of allowed symbols.
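    A rough Python sketch of this random initialization - the fixed neighbourhood width w, the state range starting at 0 and the helper names are simplifying assumptions, not the exact procedure:
    Code:
    import random

    FORBIDDEN = -1

    def init_table(n_states, allowed_symbols, counts, w):
        # table[state] = symbol, or FORBIDDEN; counts[s] is how many appearances
        # symbol s should get; w approximates -lg(p_i) (the neighbourhood width).
        table = [None] * n_states
        free = list(range(n_states))
        random.shuffle(free)
        for s in allowed_symbols:
            placed = 0
            while placed < counts[s] and free:
                x = free.pop()
                if table[x] is not None:          # already marked by a neighbour
                    continue
                table[x] = s
                placed += 1
                for b in range(w):                # neighbours at Hamming distance 1
                    y = x ^ (1 << b)
                    if y < n_states and table[y] is None:
                        table[y] = FORBIDDEN
        for x in range(n_states):                 # whatever remains stays forbidden
            if table[x] is None:
                table[x] = FORBIDDEN
        return table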
    Reply With Quote  
     

  5. #4  
    Forum Senior
    Join Date
    Jul 2008
    Posts
    316
    I've just realized that Hamming codes and bit tripling are special (degenerate) cases of ANS based data correction.
    In the previous post I argued that it would be beneficial if any two allowed states had Hamming distance at least 2.
    If we make this distance at least 3, we can unambiguously and instantly correct a single error, as in Hamming.

    To get bit tripling from ANS we use:
    states from 1000 to 1111.
    Symbol '0' is in state 1000, symbol '1' is in state 1111 (Hamming distance 3) and the remaining six states hold the forbidden symbol.
    We have only 1 appearance of each allowed symbol, so after decoding it, before the bit transfer, the state always drops to '1' and the three youngest bits are transferred from the input.

    To get Hamming 4+3:
    states are from 10000000 to 11111111.
    We have 16 allowed symbols, from '0000' to '1111', each with exactly one appearance - the state 1*******, where the stars are the 7 bits it would be coded as in Hamming - any two different ones have Hamming distance at least 3.
    After decoding, the state drops to '1' again and this '1' becomes the oldest bit after the bit transfer.
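    Spelled out as tables (a Python illustration; the hamming74 helper, assumed to return the 7-bit codeword of a 4-bit symbol, is hypothetical):
    Code:
    FORBIDDEN = '-'

    # Bit tripling as an ANS table: 8 states 1000..1111, two allowed symbols
    TRIPLING = {s: FORBIDDEN for s in range(0b1000, 0b10000)}
    TRIPLING[0b1000] = '0'
    TRIPLING[0b1111] = '1'          # Hamming distance 3 from state 1000

    # Hamming 4+3 as an ANS table: 128 states 10000000..11111111; each 4-bit
    # symbol sits at 1xxxxxxx, where xxxxxxx is its 7-bit Hamming codeword
    def hamming_table(hamming74):
        table = {s: FORBIDDEN for s in range(0b10000000, 0b100000000)}
        for symbol in range(16):
            table[0b10000000 | hamming74(symbol)] = symbol
        return table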

    The fact that each allowed symbol has only one appearance means that after decoding we drop to '1' every time - it's a kind of degenerate case: all blocks are independent and we don't transfer any redundancy between them.
    It can handle a large error density, like 1/7 (for Hamming 4+3)... but only as long as each block contains at most 1 error.
    In practice errors don't come with such regularity, and even with a much smaller error density Hamming loses a lot of data (like 16 bits per kilobyte for 0.01 error probability).

    Let's think about the theoretical limit on the number of bits of redundancy we have to add per bit of information, for an assumed statistical error distribution, to be able to fully correct the file.
    To find this threshold, let's consider a simpler looking question: how much information is stored in such an uncertain bit?
    Take the simplest error distribution model - each bit is flipped with probability e (near zero), so if we see '1' we know that with probability 1-e it's really '1', and with probability e it's '0'.
    Knowing which of these cases we have is worth
    h(e) = -e lg(e) - (1-e) lg(1-e)
    bits - with that knowledge we would have the whole bit.
    So such an uncertain bit is worth 1 - h(e) bits.
    So to transfer n real bits, we have to use at least n/(1 - h(e)) of these uncertain bits - the theoretical limit to be able to read the message is (asymptotically)
    h(e)/(1 - h(e)) additional bits of redundancy per bit of information.

    So a perfect data correction coder for e = 1/100 error probability would need only an additional 0.088 bits/bit to be able to restore the message.
    Hamming 4+3, despite using an additional 0.75 bits/bit, loses 16 bits/kilobyte with the same error distribution.
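    A quick numeric check of these figures (Python):
    Code:
    from math import log2

    def h(e):                     # binary entropy, in bits
        return -e*log2(e) - (1 - e)*log2(1 - e)

    e = 0.01
    print(h(e))                   # ~0.081 bits of uncertainty per received bit
    print(h(e) / (1 - h(e)))      # ~0.088 bits of redundancy per information bit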

    Hamming assumes that every 7-bit block can arrive in 8 ways - correct, or with one of its 7 bits changed.
    It uses the same amount of information to encode each of them, so it has to add at least lg(8) = 3 bits of redundancy to each block - we see it's done optimally...
    ... but only if all 8 of these ways were equally probable under the error distribution...
    In practice, the most probable case is no error, then one error... and, with much smaller probabilities, more errors... depending on what the error distribution of our medium looks like.

    To go in the direction of the perfect error correction coder, we have to break with the uniform distribution of cases used in Hamming and try to match the real error distribution probabilities.

    If the intermediate state of ANS based data correction can take many values, we transfer some redundancy - the 'blocks' become connected, and if more errors occur in one of them, we can use this connection to see that something is wrong and use the unused redundancy from the succeeding blocks to correct it - relying on the assumption that, according to the error distribution, the succeeding blocks are correct with large probability.
    We have huge freedom while choosing the ANS parameters to get closer to the assumed probability model of the error distribution... to the perfect data correction coder.
    Intuitively (?), using ANS with two allowed symbols and properly chosen functions we could come close to this limit for a uniform error distribution (-lg(1-q) > h(e)/(1-h(e))), but it would require a large number of states... and an enormous amount of computation...
    Reply With Quote  
     

  6. #5  
    Forum Senior
    Join Date
    Jul 2008
    Posts
    316
    I've just finished a large paper about ANS. It adds some deeper analysis and gathers, rethought, the information I've posted on different forums...

    It also shows that the presented data correction approach can really allow us to reach the theoretical Shannon limit and appears to have expected linear time, so it should be much better than the only practical method of this kind in use - Low Density Parity Check (LDPC):
    http://arxiv.org/abs/0902.0271
    Reply With Quote  
     

  7. #6 Re: Data correction methods resistant to pessimistic cases 
    Suspended
    Join Date
    Apr 2008
    Posts
    2,176
    Quote Originally Posted by Jarek Duda
    Standard data correction methods take blocks of data and enlarge them to protect against some specific set of errors. We lose the whole block if the errors fall outside this set.
    For example Hamming (7,4) uses 3 additional checksum bits per 4 bits of information - it works fine as long as there is at most 1 error per 7-bit block.
    We use quite a lot of redundancy, but is it safe now? The problem is the pessimistic cases - even if the expected error rate is 1/100 bits, it still happens quite often that some 7-bit block contains 2 errors.

    To be able to correct errors we usually add a constant density of redundancy.
    But the density of errors usually isn't constant - it fluctuates - sometimes it is above average, sometimes below.
    When it is above, we need a lot of redundancy to cope with these pessimistic cases.
    When it is below, we have used more redundancy there than was really needed - we waste some capacity.

    The idea is to somehow transfer these unnecessary surpluses of redundancy to help with the difficult cases.
    Thanks to this, instead of adding redundancy according to the pessimistic error density, we could add it for a density just above the average, which is usually a few orders of magnitude smaller.
    To do this we cannot place the data in independent blocks (like the 7-bit blocks in Hamming) - their redundancy would be independent too.
    We should use a coding which allows us to hide the redundancy so that each piece of it affects a large part of the message.

    I'll now show an example of how it can be done.
    Unfortunately it has variable latency - simple errors can be corrected quickly, but more complicated ones may take a long time... maybe there are better ways?

    The idea is to use a very precise coding - such that any error makes the rest of the decoded sequence look like a completely random bit sequence.
    For example a block cipher which uses the previous block to calculate the following one... but there is a much better coding for this purpose, which I will describe later.

    Now add some easily recognizable redundancy to the information - for example insert a '1' between each digit (each such bit is hidden in all succeeding bits of the encoded message).
    If while decoding we find a '0' in one of these places, it means some error occurred earlier, most probably nearby.
    Knowing the statistical characteristics of the expected errors, we can make a list of the most probable errors for such cases and sort it by probability - at the top of this list there should be 'previous bit flipped', ... after a while 'two bits flipped: ...' can appear. This list can be very large.
    Now if we know that an error appeared (nearby), we go through this list position by position, apply the correction as if it really were this case (flip some bits) and try to decode a further fixed number of bits (a few dozen).
    If everything is OK - we get only '1' in the selected positions - we can assume that it was this error.
    If not, we try the next one from the list.
    This list can be generated online - given a large amount of time we could repair even a badly damaged transmission.
    While creating the list, we have to remember that errors can also appear in the succeeding bits.

    Using block ciphers is a bit nasty - they are slow and we get large blocks to search for errors in...
    There is a new coding just ideal for the above purpose - Asymmetric Numeral Systems (ANS) - a new entropy coder which has very nice properties for cryptography... and data correction - it's much faster than block ciphers and uses small blocks of various lengths.
    Here for example is a demonstration of it:
    http://demonstrations.wolfram.com/Da...umeralSystems/

    What do you think about it?
    You say they project or enlarge it.

    I believe what they used to do was take fairly large amounts of information and checksum the entire lot. The chances of a checksum matching by accident were astronomical - billions or trillions to one - on a fairly large amount of information.

    Being put through so many different kinds of mathematical operations increases the chances of finding an error.

    If that whole block was bad, you just resent the whole block. If it happened again, you had a communication problem; you rebooted and tried again.

    I believe they use the position of the small block of information going into the buffer as well as its value, so a larger difference is created - much like affecting an asteroid while it is far away: it causes an exponential difference in the outcome based upon the original position and value, versus something else, or anything else.


    But I am not an expert.


    The idea is to get the chance of two codes - one correct and one in error - matching each other way down. They have done that. Then, considering the low number of errors, the chances that you will get an error and get an exact match after going through all these checks are very high.

    Some schemes compute a checksum before they send the data and send that checksum value at the end of the data, so the chances of matching it by accident are really out of this world. And since the last bits get thrown away, it helps the accuracy even more, because the end of a communication is probably where a failure is going to occur.


    When you see a checksum error, the checksum algorithm is working perfectly.




    Sincerely,


    William McCormick
    Reply With Quote  
     

  8. #7  
    Forum Senior
    Join Date
    Jul 2008
    Posts
    316
    There is no magic in data correction... we just choose some codewords (possible sequences) and try to find the one closest to what we've received.
    It looks simple, but it turns out to be extremely difficult to make it practical - such that the correction algorithm doesn't have to check an exponentially large set of possible corrections - and the paper gives a very strong argument that the presented approach finally does it.

    The only known 'practical' approach - LDPC, as you said - distributes a huge number of checksum bits uniformly; it takes a long time (N^2 or more) to distribute them, requires solving an NP problem to find the proper correction, and still we do not reach the theoretical limit.
    The presented method works in a completely different way. It encodes very quickly, in clearly linear time. It can be imagined as path tracking - we know the starting and ending positions and we want to walk between them along the proper path. While we are on this path everything is fine, but when we lose it, in each step we have a chosen probability of becoming aware of this fact. Then we can go back and try some correction. We pay for this parameter with capacity, but if this probability is chosen higher than a threshold corresponding exactly to Shannon's limit, the number of corrections we have to try no longer grows exponentially but polynomially, and so we can easily verify whether it was the proper correction... checking all possibilities in pessimistically polynomial time.

    Sincerely,
    Jarek
    Reply With Quote  
     

  9. #8  
    Suspended
    Join Date
    Apr 2008
    Posts
    2,176
    Quote Originally Posted by Jarek Duda
    There is no magic in data correction... we just choose some codewords (possible sequences) and try to find the one closest to what we've received.
    It looks simple, but it turns out to be extremely difficult to make it practical - such that the correction algorithm doesn't have to check an exponentially large set of possible corrections - and the paper gives a very strong argument that the presented approach finally does it.

    The only known 'practical' approach - LDPC, as you said - distributes a huge number of checksum bits uniformly; it takes a long time (N^2 or more) to distribute them, requires solving an NP problem to find the proper correction, and still we do not reach the theoretical limit.
    The presented method works in a completely different way. It encodes very quickly, in clearly linear time. It can be imagined as path tracking - we know the starting and ending positions and we want to walk between them along the proper path. While we are on this path everything is fine, but when we lose it, in each step we have a chosen probability of becoming aware of this fact. Then we can go back and try some correction. We pay for this parameter with capacity, but if this probability is chosen higher than a threshold corresponding exactly to Shannon's limit, the number of corrections we have to try no longer grows exponentially but polynomially, and so we can easily verify whether it was the proper correction... checking all possibilities in pessimistically polynomial time.

    Sincerely,
    Jarek
    I believe they did it by looking at the information after they put it through their mixer, error checker or algorithm, and by looking at how often certain-sized clumps of information duplicate each other through the mixer, even though they are not the same data. They were able to maximize the size of the cluster while maintaining accuracy.

    They probably keep it well out of the range we consider safe for risking our lives.

    Years ago some experts called CRC 'carriage return count', and although that may be the key part of the algorithm, I believe there is more to it than that.

    CRC (C`R-C’) noun
    Acronym for cyclical (or cyclic) redundancy check. A procedure used in checking for errors in data transmission. CRC error checking uses a complex calculation to generate a number based on the data transmitted. The sending device performs the calculation before transmission and sends its result to the receiving device. The receiving device repeats the same calculation after transmission. If both devices obtain the same result, it is assumed that the transmission was error-free. The procedure is known as a redundancy check because each transmission includes not only data but extra (redundant) error-checking values. Communications protocols such as XMODEM and Kermit use cyclical redundancy checking.

    Microsoft Press® Computer and Internet Dictionary © 1997, 1998 Microsoft Corporation. All rights reserved. Portions, The Microsoft Press® Computer Dictionary, 3rd Edition, Copyright © 1997 by Microsoft Press. All rights reserved.

    I love the idea of better, faster, stronger. I have not seen many errors so far from the system. That should be the priority.

    So if you can match the security and make it much faster, I cannot see why it would not work.

    Sincerely,


    William McCormick
    Reply With Quote  
     

  10. #9  
    Forum Senior
    Join Date
    Jul 2008
    Posts
    316
    A checksum/hash value lets us tell whether the file is correct or not; it doesn't by itself let us repair the errors which occurred. Using huge checksums (hash values), we could arrange that, out of all typical noises for a given channel, almost certainly only the proper correction passes - we can prove that Shannon's limit is achievable this way (see the paper), but it's a completely impractical correction method.
    The presented approach spreads this checksum/hash value uniformly over the whole message, so that we see that something is wrong not only at the end of decoding, but mainly while processing the file, just after the damage occurs - making correction practical.
    Reply With Quote  
     

  11. #10  
    Suspended
    Join Date
    Apr 2008
    Posts
    2,176
    Quote Originally Posted by Jarek Duda
    A checksum/hash value lets us tell whether the file is correct or not; it doesn't by itself let us repair the errors which occurred. Using huge checksums (hash values), we could arrange that, out of all typical noises for a given channel, almost certainly only the proper correction passes - we can prove that Shannon's limit is achievable this way (see the paper), but it's a completely impractical correction method.
    The presented approach spreads this checksum/hash value uniformly over the whole message, so that we see that something is wrong not only at the end of decoding, but mainly while processing the file, just after the damage occurs - making correction practical.
    I do not see why you would not discard the whole block containing an error, small or large.

    Re-send it and check it again?

    Once you access register values out of the natural order of filling them,
    you risk a second error. At least in my mind, with my limited understanding of it.

    In other words, if some range is filled with correct and incorrect data, it would seem more dangerous to overwrite just a portion of the data,
    considering its position will be based on input that might be flawed or causing the original error. In other words, the program that scans is going to be infected with the code that has the error.
    That error could possibly send the checking device off track, rather than just seeing an error, following a controlled procedure for purging the bad data, and resuming from the last good portion of data.

    At least if some hammer-and-anvil, fill-check, fill-check monotony is used, the computer should crash after it tries to write and check a block that continues to fail - which is what you want, at least in my mind.

    I would not like the computer to dissect the unknown, play with it, or introduce it to a program. Better to just purge it and try again, and crash the machine before it damages anything if it continues to error.

    It should probably check the BIOS loop, or whatever is controlling or looping the processor, and also check any formatting of the memory involved, before starting again. If any of the above are bad or do not check out, just crash it.






    Sincerely,


    William McCormick
    Reply With Quote  
     

  12. #11  
    Forum Senior
    Join Date
    Jul 2008
    Posts
    316
    Discarding a damaged block and sending it once more is not always an option - for example, when transmitting data over radio, rates are now so high that it's unavoidable that a few per mille of the bits get damaged. Modern electronics also require such mechanisms, for example because of cosmic rays... or imagine re-sending data from a deep space mission...

    Especially since we can protect against a small amount of errors (noise). Please look at Hamming codes (http://en.wikipedia.org/wiki/Hamming_code) - for example, encoding each 4 bits in a 7-bit block, if at most one bit is damaged in such a block, we can repair the error. These codes are far from perfect. Shannon's theoretical limit says, for example, that when the probability of each bit being damaged is 1/100, we need to enlarge the file asymptotically only 1.088 times - add on average 0.088 bits of redundancy per bit - and we should be able to fully repair the damaged file (Hamming adds 0.75 and still irreversibly loses about 16 bits/kilobyte). In practice it's extremely difficult to get near this limit, but there are strong arguments that the presented approach can finally do it.
    Reply With Quote  
     

  13. #12  
    Suspended
    Join Date
    Apr 2008
    Posts
    2,176
    Quote Originally Posted by Jarek Duda
    Discarding a damaged block and sending it once more is not always an option - for example, when transmitting data over radio, rates are now so high that it's unavoidable that a few per mille of the bits get damaged. Modern electronics also require such mechanisms, for example because of cosmic rays... or imagine re-sending data from a deep space mission...

    Especially since we can protect against a small amount of errors (noise). Please look at Hamming codes (http://en.wikipedia.org/wiki/Hamming_code) - for example, encoding each 4 bits in a 7-bit block, if at most one bit is damaged in such a block, we can repair the error. These codes are far from perfect. Shannon's theoretical limit says, for example, that when the probability of each bit being damaged is 1/100, we need to enlarge the file asymptotically only 1.088 times - add on average 0.088 bits of redundancy per bit - and we should be able to fully repair the damaged file (Hamming adds 0.75 and still irreversibly loses about 16 bits/kilobyte). In practice it's extremely difficult to get near this limit, but there are strong arguments that the presented approach can finally do it.
    I understand what you are saying about sending an additional code along with the original code that can replace a missing bit or a bit that is in error.

    High-end transmissions use triple transmission, like in high-tech models: two out of three matching transmissions are used as the transmission.
    You could do 4/12 and the chances of getting a bad packet would be low. There would be literally no need to re-send.
    In poor reception areas you would just get the damaged package. Some program on the receiving end could reassemble a complete packet from all three partials - in my opinion, in most cases. And if not, you did not get enough of the communication anyway.

    The communication systems that feed the transmitter probably use this technology. I understand that everyone wants to cheapen things up. But why?

    Better to do it right once, or get it through more often, than to try to save a few bits. In my opinion.

    The overhead for 4/7 is 3/7 ≈ 0.43.
    The overhead for 4/12 is 8/12 ≈ 0.67.



    Sincerely,


    William McCormick
    Reply With Quote  
     

  14. #13  
    Forum Senior
    Join Date
    Jul 2008
    Posts
    316
    Transmitting each bit in three copies is far from what can be done optimally: it adds 2 bits of redundancy per bit and, with 1/100 error probability, still loses on average about 2.4 bits per kilobyte (0.01*0.01*0.99*3*8000), while the optimal (presented) approach can add about 0.088 bits of redundancy per bit and shouldn't lose any bit.
    So using almost 3 times less transmitted data we get better correction.
    Reply With Quote  
     

  15. #14  
    Suspended
    Join Date
    Apr 2008
    Posts
    2,176
    Quote Originally Posted by Jarek Duda
    Transmitting each bit in three copies is far from what can be done optimally: it adds 2 bits of redundancy per bit and, with 1/100 error probability, still loses on average about 2.4 bits per kilobyte (0.01*0.01*0.99*3*8000), while the optimal (presented) approach can add about 0.088 bits of redundancy per bit and shouldn't lose any bit. So using almost 3 times less transmitted data we get better correction.
    So using almost 3 times less transmitted data we get better correction.
    Since you are sending two extra packets of information, the chances of two errors in consecutive packets go up. And you can always use another form of correction on the data from all three damaged packets to correct it.

    But I understand what you are saying.

    To me, if you are going for speed and have room for error, this might be the way to go. You might be able to compress your communication by as much as 95 percent and still send it three times or more, with far less overhead.

    http://www.multicom.uwaterloo.ca/fil...llisCoding.pdf

    Don't hold me to anything in that link. Ha-ha.

    This was a form of communication used in telephone modems years ago. Hayes in particular used this method.

    I was amazed at the speed of two Hayes modems connected to one another over standard non-dedicated phone lines.

    But they say the U.S. Robotics modems put through even more.



    Sincerely,


    William McCormick
    Reply With Quote  
     

  16. #15  
    Forum Senior
    Join Date
    Jul 2008
    Posts
    316
    Please look at my paper or the first posts in this topic: as with Hamming or tripling each bit - if the message is divided into independent blocks, in a large message we will always lose some of them because of the pessimistic cases. To make the probability of this drop asymptotically to 0 we cannot make such a division - we have to use potentially infinite blocks. Information theoreticians are aware of this, but the only known such approach - LDPC - needs exponential decoding time, so it is used rather only in desperate situations like deep space missions. The presented approach is finally practical, which means it could be commonly used in all kinds of electronics, increasing transfer rates and reliability.
    Reply With Quote  
     

  17. #16  
    Suspended
    Join Date
    Apr 2008
    Posts
    2,176
    Quote Originally Posted by Jarek Duda
    Please look at my paper or the first posts in this topic: as with Hamming or tripling each bit - if the message is divided into independent blocks, in a large message we will always lose some of them because of the pessimistic cases. To make the probability of this drop asymptotically to 0 we cannot make such a division - we have to use potentially infinite blocks. Information theoreticians are aware of this, but the only known such approach - LDPC - needs exponential decoding time, so it is used rather only in desperate situations like deep space missions. The presented approach is finally practical, which means it could be commonly used in all kinds of electronics, increasing transfer rates and reliability.
    I was reading it.

    I am just at a loss as to how you are going to trade off between time lag and missed information. In other words, if you don't wait and send three consecutive identical packets, you will not be able to correct a single missing packet.

    But if you send three, you would need three times more interference to lose one packet, plus there are higher odds of getting three bad packets in a row.

    Ideally, keeping the packets small, you would want to send three different source packets, three times each, but send one of each of the three source packets in succession and repeat that three times.

    So you would send packets 1,2,3 1,2,3 1,2,3, and on the other end just match two and use them. If there are no matches, use one based on an algorithm.

    But I am not an expert. I kind of thought this was beaten to death already.

    Sincerely,


    William McCormick
    Reply With Quote  
     

  18. #17  
    Forum Senior
    Join Date
    Jul 2008
    Posts
    316
    No - in this approach we don't send three copies - it's a completely different method.
    We send the message once, but with uniformly distributed redundancy inserted using an entropy coder with an additional forbidden symbol.

    ps. I've been asked to make the correction algorithm precise - there is a new version of the paper available (with a few required corrections) - the same link.

    pss. I've just spotted that there is a problem with combining data correction with the method of enlarging the internal state in which we switch some of the youngest bits - these bits are immediately transferred and used in the next cycle - we would have to locate them...
    To make correction simpler, the order should be changed: a coding step is bit transfer, then switching the youngest bit (or a few) of the reduced intermediate state, then encoding the symbol.
    To make sure this bit switch doesn't lead outside I_s, we have to use l_s that are even numbers (or correspondingly higher powers of 2).
    Reply With Quote  
     

  19. #18  
    Forum Senior
    Join Date
    Jul 2008
    Posts
    316
    The simulator of the correction process has just been published on Wolfram's page:
    http://demonstrations.wolfram.com/CorrectionTrees/
    It shows that we finally have a near-Shannon-limit method working in nearly linear time for any noise level.

    For a given probability of bit damage (p_b), we choose the parameter p_d. The higher this parameter is, the more redundancy we add and the easier it is to correct errors.
    We want to find the proper correction (the red path in the simulator). The main correction mechanism is that while we are expanding the proper correction everything is fine, but in each step of expanding a wrong correction we have probability p_d of realizing it. With p_d large enough, the number of corrections we have to check no longer grows exponentially.
    At each step the tree structure built so far is known, and we use it to choose the most probable leaf to expand.
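    A minimal Python sketch of this 'expand the most probable leaf' search, keeping candidate corrections on a heap - decode_step, the scoring by assumed bit flips and the expansion budget are illustrative assumptions, not the actual simulator code:
    Code:
    import heapq
    from itertools import count
    from math import log

    def correct(received, decode_step, p_b, max_expansions=100000):
        # received: list of 0/1 bits; p_b: assumed bit-flip probability.
        # decode_step(state, bit) -> (ok, new_state); ok is False when the decoder
        # produces the forbidden symbol, which prunes that branch.
        # Heap entries: (-log P(assumed flips), tiebreaker, position, state, flips).
        tie = count()
        heap = [(0.0, next(tie), 0, None, ())]
        while heap and max_expansions > 0:
            max_expansions -= 1
            cost, _, pos, state, flips = heapq.heappop(heap)
            if pos == len(received):
                return list(flips)                    # whole message decoded: accept
            for bit, penalty, flipped in (
                (received[pos],     -log(1 - p_b), False),   # assume bit is correct
                (received[pos] ^ 1, -log(p_b),     True),    # assume bit was flipped
            ):
                ok, new_state = decode_step(state, bit)
                if ok:                                # forbidden symbol not produced
                    new_flips = flips + ((pos,) if flipped else ())
                    heapq.heappush(heap, (cost + penalty, next(tie),
                                          pos + 1, new_state, new_flips))
        return None                                   # correction budget exhausted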

    I've realized that practical correction methods (not requiring exponential correction time) need a bit more redundancy than the theoretical (Shannon's) limit. Redundancy allows us to reduce the number of corrections to consider. In practical correction methods we have to keep elongating corrections, and so we have to assume that the expected number of corrections up to a given moment is finite, which requires more redundancy than Shannon's limit (observe that block codes fulfill this assumption).
    This limit is calculated in the latest version of the paper (0902.0271). The basic correction algorithm (as in the simulator) works at a slightly worse limit (it needs an encoded file larger by at most 13%), but it can probably be improved.
    Finally, this new family of random trees has two phase transitions - for small p_d < p_d^0 the tree immediately expands exponentially. For p_d^0 < p_d < p_d^2 the tree generally has small width, but rare high error concentrations make its expected width infinite (like a long tail in a probability distribution). For p_d > p_d^2 it has finite expected width.

    The error correction methods used today work practically only for very low noise (p_b < 0.01). The presented approach works well for any noise level (p_b < 0.5).
    For small noise it needs an encoded file size practically at Shannon's limit. The difference appears for large noise: it needs a file size at most twice larger than the limit.
    A practical method for large noise gives a new way to increase the capacity of transmission lines and storage devices - for example, place two bits where we would normally place one - the cost is a large increase in noise, but now we can handle it.

    For extremely large noise we can no longer use ANS. Fig. 3 of the paper shows how to handle it. For example, if we have to increase the size of the file 100 times, we can encode each bit in 100 bits - encode 1 as (11...111 XOR 'hash value of the already encoded message'), and the same for 0. Now while creating the tree, each split will have a different number of corrected bits - a different weight.
    Reply With Quote  
     

  20. #19 Last update 
    Forum Senior
    Join Date
    Jul 2008
    Posts
    316
    I defended my PhD in computer science (now one in physics is in progress), half of which was about this new approach to data correction - basically it extends the convolutional codes concept to use much larger states (which requires working not on the whole space of states as usual, but only on the used tree of states, and allows practically complete repair in linear time) and uses an entropy coder to add the redundancy (giving simultaneous data compression; the rate can be changed fluently).
    The thesis is the paper from arXiv with a few small improvements (it can be downloaded from http://tcs.uj.edu.pl/graduates.php?degree=1&lang=0 ).
    Here is the presentation I've used - with a few new pictures which could make the concepts (and asymmetric numeral systems) easier to understand, and e.g. some comparison to LDPC.
    Reply With Quote  
     

  21. #20  
    Forum Senior
    Join Date
    Jul 2008
    Posts
    316
    I apologize for digging this thread up, but there is finally a practical implementation, and it beats modern state-of-the-art methods in many applications.
    It can be seen as a greatly improved convolutional-code-like concept - for example, it no longer uses convolution, but a carefully designed, extremely fast operation that allows working on much larger states instead. Other main improvements are using bidirectional decoding and a heap (logarithmic complexity) instead of the stubbornly used stack (linear complexity). For simplicity it will be called Correction Trees (CT).
    The most important improvement is that it can handle larger channel damage at the same rate. Adding redundancy to double (rate 1/2) or triple (rate 1/3) the message size should theoretically allow complete repair of up to, correspondingly, 11% or 17.4% damaged bits for the Binary Symmetric Channel (each bit is independently flipped with this probability). Unfortunately, this Shannon limit is rather unreachable - in practice we can only reduce the Bit Error Rate (BER) when the noise is significantly lower than this limit. Turbo Codes (TC) and Low Density Parity Check (LDPC) codes are nowadays seen as the best methods - here is a comparison of some of their implementations with the CT approach, plotting output BER against input noise level:



    We can see that CT still repairs where the others have given up - making it perfect for extreme applications like deep space or underwater communication. Unfortunately, repairing such extreme noise requires extreme resources - a software implementation on a modern PC decodes only a few hundred bytes per second at extreme noise levels. Additionally, by using more resources the correction capability can be improved further (lowering the line in the figure above).
    On the other hand, CT encoding is extremely fast and correction for low noise is practically free - up to about 5-6% damaged bits at rate 1/2. In comparison, TC correction always requires a lot of calculation, while LDPC additionally requires a lot of work just for encoding.
    So, in contrast to them, CT is just perfect for e.g. hard disks - everyday work involves low noise, so using CT would be extremely cheap. On the other hand, if the data is really badly damaged, correction is still possible but becomes costly. Such correction could also be done externally, allowing further improvement of the correction capabilities.

    Paper: [1204.5317] Correction Trees as an Alternative to Turbo Codes and Low Density Parity Check Codes
    Implementation: https://indect-project.eu/correction-trees/
    Simulator: Wolfram Demonstrations Project
    Reply With Quote  
     
