Raptor Codes
Amin Shokrollahi, EPFL


TRANSCRIPT

  • Raptor Codes

  • Communication on Multiple Unknown Channels

  • Example: Popular Download

  • Example: Peer-to-Peer

  • Example: Satellite

  • The erasure probabilities are unknown.

    Want to come arbitrarily close to capacity on each of the erasure channels, with a minimum amount of feedback.

    Traditional codes don't work in this setting since their rate is fixed. We need codes that can adapt automatically to the erasure rate of the channel.

  • Fountain Codes

    Sender sends a potentially limitless stream of encoded bits. Receivers collect bits until they are reasonably sure that they can recover the content from the received bits, and then send STOP feedback to the sender.

    Automatic adaptation: receivers with a larger loss rate need longer to receive the required information.

    We want each receiver to be able to recover the content from the minimum possible amount of received data, and to do so efficiently. (A toy simulation of this loop follows below.)
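    As an illustration of the fountain principle (an aside, not from the slides): a minimal Python sketch of the collect-until-decodable loop, using a toy dense random fountain over GF(2). The name fountain_symbol and the explicit bitmask header are hypothetical choices for this sketch; a real system would use LT/Raptor symbols and a BP decoder.

        import random

        def fountain_symbol(message, k, rng):
            # message: int whose low k bits are the data.
            # Each encoded bit is the XOR of a uniformly random subset
            # of the k input bits; the subset mask acts as the header.
            mask = rng.getrandbits(k)
            value = bin(message & mask).count("1") & 1
            return mask, value

        rng = random.Random(0)
        k = 64
        message = rng.getrandbits(k)
        for p in (0.1, 0.5, 0.9):                 # channel erasure rates
            basis, sent, received = [], 0, 0
            while len(basis) < k:                 # rank k <=> decodable
                sent += 1
                mask, value = fountain_symbol(message, k, rng)
                if rng.random() < p:
                    continue                      # symbol erased in transit
                received += 1
                r = mask
                for b in basis:                   # GF(2) basis reduction
                    r = min(r, r ^ b)
                if r:
                    basis.append(r)
            # here the receiver would send its STOP feedback;
            # actual recovery solves the linear system using the values
            print(f"p={p}: sent={sent}, received={received}, k={k}")

    Note how the number of received symbols stays close to k for every loss rate, while the sending time grows with the loss rate: exactly the automatic adaptation described above.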

  • Universality and Efficiency

    [Universality] Want sequences of Fountain Codes for which the overhead is arbitrarily small.

    [Efficiency] Want per-symbol encoding to run in close to constant time, and decoding to run in time linear in the number of output symbols.

  • LT-Codes

    Invented by Michael Luby in 1998.

    First class of universal and almost efficient Fountain Codes.

    Output distribution has a very simple form.

    Encoding and decoding are very simple.

  • LT-Codes

  • The LT Coding Process

    [Figure: example with degree 2: choose 2 random original input symbols, XOR them, insert a header, and send.]
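    A sketch of this process in Python (illustrative, not from the slides): the ideal soliton distribution is used for simplicity instead of Luby's robust soliton, and the header is an explicit neighbor list rather than a PRNG seed. The peeling decoder shown is the BP decoder for the BEC.

        import random

        def ideal_soliton(k):
            # rho(1) = 1/k, rho(d) = 1/(d(d-1)) for d = 2..k
            return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

        def lt_encode_symbol(message, probs, rng):
            # Choose a degree d, pick d random input symbols, XOR them.
            r, d = rng.random(), len(probs)
            for deg, p in enumerate(probs, start=1):
                r -= p
                if r <= 0:
                    d = deg
                    break
            neighbors = rng.sample(range(len(message)), d)
            value = 0
            for i in neighbors:
                value ^= message[i]
            return neighbors, value           # header + payload

        def lt_peel(k, received):
            # Peeling (BP on the BEC): repeatedly release degree-1 symbols.
            recovered = [None] * k
            work = [[set(nb), val] for nb, val in received]
            progress = True
            while progress:
                progress = False
                for sym in work:
                    nb, val = sym
                    for i in [j for j in nb if recovered[j] is not None]:
                        nb.discard(i)         # strip known neighbors
                        val ^= recovered[i]
                    sym[1] = val
                    if len(nb) == 1:          # degree 1: releases an input
                        i = next(iter(nb))
                        if recovered[i] is None:
                            recovered[i] = val
                            progress = True
            return recovered

        rng = random.Random(1)
        k = 100
        msg = [rng.randint(0, 1) for _ in range(k)]
        probs = ideal_soliton(k)
        received = []
        while True:                           # collect until decodable
            received.append(lt_encode_symbol(msg, probs, rng))
            dec = lt_peel(k, received)
            if None not in dec:
                break
        assert dec == msg
        print(f"recovered k={k} inputs from {len(received)} LT symbols")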

  • Raptor Codes

    [Figure: a Raptor code is a traditional pre-code followed by a weakened LT-code (LT-light).]
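    A structural sketch of the two stages (illustrative only; not the distributions from the talk): toy_precode and the truncated soliton-style weights below are hypothetical stand-ins for a proper erasure-correcting pre-code and a tuned LT-light distribution.

        import itertools
        import random

        def toy_precode(message, n_parity, rng):
            # Append parity symbols, each the XOR of a small random subset
            # of the message. A real Raptor code uses a proper
            # erasure-correcting pre-code (e.g. LDPC-like); this is only a
            # placeholder to show the two-stage structure.
            block = list(message)
            for _ in range(n_parity):
                p = 0
                for i in rng.sample(range(len(message)), 3):
                    p ^= message[i]
                block.append(p)
            return block

        def light_lt_symbol(block, rng, max_degree=8):
            # "LT-light": constant average degree, soliton-like weights
            # truncated at max_degree (illustrative choice).
            weights = [0.05] + [1.0 / (d * (d - 1))
                                for d in range(2, max_degree + 1)]
            r = rng.random() * sum(weights)
            d = max_degree
            for deg, w in enumerate(weights, start=1):
                r -= w
                if r <= 0:
                    d = deg
                    break
            neighbors = rng.sample(range(len(block)), d)
            v = 0
            for i in neighbors:
                v ^= block[i]
            return neighbors, v

        rng = random.Random(0)
        msg = [rng.randint(0, 1) for _ in range(1000)]
        intermediate = toy_precode(msg, 50, rng)          # stage 1: pre-code
        stream = (light_lt_symbol(intermediate, rng)      # stage 2: LT-light
                  for _ in itertools.count())
        print(next(stream))

    The capped degree of the LT stage is what buys constant per-symbol encoding time; the pre-code then cleans up the small fraction of inputs the weakened LT stage leaves uncovered.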

  • Further Content

    How do Raptor Codes designed for the BEC perform on other symmetric channels (with BP decoding)?

    Information theoretic bounds, and fraction of nodes of degrees one and two in capacity-achieving Raptor Codes.

    Some examples

    Applications

  • Parameters

    Measure the residual error probability as a function of the overhead for a given channel.
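    A sketch of such a measurement loop (an aside, not from the slides): a Monte-Carlo sweep over overhead values, with a toy dense random fountain and a GF(2) rank check standing in for the actual Raptor BP decoder.

        import random

        def gf2_rank(rows):
            # Incremental XOR basis over GF(2); each row is an int bitmask.
            basis = []
            for r in rows:
                for b in basis:
                    r = min(r, r ^ b)
                if r:
                    basis.append(r)
            return len(basis)

        def residual_error(k, overhead, trials, rng):
            # Fraction of trials in which k*(1+overhead) received symbols
            # do not determine all k inputs (decoding failure).
            n = int(k * (1.0 + overhead))
            fails = sum(gf2_rank([rng.getrandbits(k) for _ in range(n)]) < k
                        for _ in range(trials))
            return fails / trials

        rng = random.Random(0)
        for eps in (0.00, 0.02, 0.05, 0.10):
            print(f"overhead {eps:.2f}: "
                  f"residual error ~ {residual_error(100, eps, 200, rng):.3f}")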

  • Incremental Redundancy Codes

    Raptor codes are true incremental redundancy codes.

    A sender can generate as many output bits as necessary for successful decoding.

    Suitably designed Raptor codes are close to the Shannon capacity for a variety of channel conditions (from very good to rather bad).

    Raptor codes are competitive with the best LDPC codes under different channel conditions.

  • Sequences Designed for the BEC

    Simulations done on the AWGN(σ) channel for various values of σ.

  • Sequences Designed for the BEC

    [Table: thresholds and normalized SNR Eb/N0 of the best designs (so far), with turbo codes for comparison.]

  • Sequences Designed for the BEC

    Not too bad, but quality decreases as the amount of noise on the channel increases.

    Need to design better distributions.

    Idea: adapt the Gaussian approximation technique of Chung, Richardson, and Urbanke.

  • Gaussian Approximation

    Assume that the messages sent from input to output nodes are Gaussian.

    Track the mean of these Gaussians from one round to the next. Degree distributions can be designed using this approximation.

    However, they don't perform that well.

    Anything else we can do with this?
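    As a concrete building block (an aside, not from the slides): under the usual consistency assumption a Gaussian message with mean m has variance 2m, and the recursion is driven by E[tanh(z/2)] under that density. A minimal numerical sketch in the style of the φ function of Chung, Richardson, and Urbanke; output_node_update is a hypothetical check-node-style mean update for an XOR node, for illustration only, not the exact recursion from the talk.

        import numpy as np

        def phi(m, num=2001):
            # phi(m) = 1 - E[tanh(z/2)] for z ~ N(m, 2m), m > 0
            # (consistent-Gaussian assumption of GA density evolution).
            if m <= 0:
                return 1.0
            s = np.sqrt(2.0 * m)
            z = np.linspace(m - 8 * s, m + 8 * s, num)
            pdf = np.exp(-(z - m) ** 2 / (4.0 * m)) / np.sqrt(4.0 * np.pi * m)
            f = np.tanh(z / 2.0) * pdf
            # trapezoid rule, spelled out for portability
            return 1.0 - float(np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(z)))

        def phi_inv(y, lo=1e-6, hi=100.0):
            # phi is decreasing in m, so invert it by bisection.
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                if phi(mid) > y:
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        def output_node_update(m_ch, m_in, d):
            # GA mean update at an XOR (output) node of degree d, combining
            # the channel message with d-1 incoming input-side messages.
            return phi_inv(1.0 - (1.0 - phi(m_ch))
                           * (1.0 - phi(m_in)) ** (d - 1))

        print(output_node_update(m_ch=2.0, m_in=1.0, d=3))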

  • Nodes of Degree 2

    Use equality, differentiate, and compare values at 0: [condition and definitions lost in transcription].

    It can be rigorously proved that the above condition is necessary for the error probability of BP to converge to zero.

  • Nodes of Degree 2

    Use the graph induced on the input symbols by the output symbols of degree 2.

  • Nodes of Degree 2: BEC

  • Nodes of Degree 2: BEC

    Information loss!

  • Fraction of Nodes of Degree 2

  • The q-ary symmetric channel (large q)

    Double verification decoding (Luby-Mitzenmacher): if two output symbols are correct, then they verify the input symbols they share. Remove all of them from the graph and continue.

    It can be shown that the number of correct output symbols needs to be at least [constant lost in transcription] times the number of input symbols. (Joint work with Karp and Luby.)

  • The q-ary symmetric channel (large q)

    More sophisticated algorithms: use the induced graph! If two input symbols are connected by a correct output symbol, and each of them is connected to a correct output symbol of degree one, then the input symbols are verified. Remove them from the graph.

  • The q-ary symmetric channel (large q)

    More sophisticated algorithms: use the induced graph! More generally: if there is a path consisting of correct edges, and the two terminal nodes are connected to correct output symbols of degree one, then all the input symbols on the path get verified. (More complex algorithms.)

  • The q-ary symmetric channel (large q)

    Limiting case: a giant component consisting of correct edges; two correct output symbols of degree one poke the component. So the ideal distribution achieves capacity.
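    An illustrative simulation of this limiting case (an aside, not from the slides): inputs are vertices, degree-2 output symbols are random edges, and a union-find structure tracks the largest component. The giant component emerges once the number of edges exceeds half the number of inputs.

        import random

        def find(parent, x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]   # path halving
                x = parent[x]
            return x

        def largest_component(n, edges):
            parent = list(range(n))
            size = [1] * n
            best = 1
            for u, v in edges:
                ru, rv = find(parent, u), find(parent, v)
                if ru != rv:                     # union by size
                    if size[ru] < size[rv]:
                        ru, rv = rv, ru
                    parent[rv] = ru
                    size[ru] += size[rv]
                    best = max(best, size[ru])
            return best

        rng = random.Random(0)
        n = 100_000
        for ratio in (0.3, 0.5, 0.6, 0.8):
            m = int(ratio * n)   # number of degree-2 output symbols (edges)
            edges = [(rng.randrange(n), rng.randrange(n)) for _ in range(m)]
            print(f"edges/inputs = {ratio}: largest component = "
                  f"{largest_component(n, edges) / n:.3f} of inputs")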

  • Nodes of Degree 2

    What is the fraction of nodes of degree 2 for capacity-achieving Raptor Codes?

    It turns out that in the limit we need to have equality: [formula and definitions lost in transcription], where Z is the LLR of the channel.

  • General Symmetric Channels: Mimic Proof

    The proof is information-theoretic: if the fraction of nodes of degree 2 is larger than this by a constant, then the expectation of the hyperbolic tangent of the messages passed from input to output symbols at a given round of BP is larger than a constant.

    This shows that [bound lost in transcription], so the code cannot achieve capacity.

  • General Symmetric Channels: Mimic Proof

    Fraction of nodes of degree one for capacity-achieving Raptor Codes: [formula lost in transcription].

  • A better Gaussian Approximation

    Uses an adaptation of a method of Ardakani and Kschischang.

    Heuristic: messages passed from input to output symbols are Gaussian, but not vice versa.

    Find a recursion for the means of these Gaussians, and apply linear programming (a BEC analogue is sketched below).
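    For the BEC the same linear-programming idea takes a particularly simple form (an illustrative aside, not the AWGN design from the talk): the BP success condition Omega'(x) >= -ln(1-x)/(1+eps) for x in [0, 1-delta] is linear in the coefficients Omega_d, so one can minimize the average degree with an LP. A sketch using scipy; all parameter values are arbitrary illustrative choices.

        import numpy as np
        from scipy.optimize import linprog

        def design_bec_distribution(max_deg=30, eps=0.05, delta=0.02, grid=200):
            # Variables: Omega_1 .. Omega_max_deg (output degree distribution).
            # Minimize the average degree sum_d d*Omega_d subject to
            #   Omega'(x) >= -ln(1-x)/(1+eps) for x in [0, 1-delta],
            #   sum_d Omega_d = 1, Omega_d >= 0.
            degrees = np.arange(1, max_deg + 1)
            xs = np.linspace(0.0, 1.0 - delta, grid)
            # Omega'(x) = sum_d d * Omega_d * x^(d-1) is linear in Omega.
            dOmega = degrees * xs[:, None] ** (degrees - 1)
            A_ub = -dOmega                    # -Omega'(x) <= ln(1-x)/(1+eps)
            b_ub = np.log(1.0 - xs) / (1.0 + eps)
            res = linprog(c=degrees.astype(float), A_ub=A_ub, b_ub=b_ub,
                          A_eq=np.ones((1, max_deg)), b_eq=[1.0],
                          bounds=[(0, None)] * max_deg)
            return degrees, res.x

        degrees, omega = design_bec_distribution()
        for d, w in zip(degrees, omega):
            if w > 1e-4:
                print(f"Omega_{d} = {w:.4f}")

    With eps = 0 and delta -> 0 the constraint is met with equality by the soliton-like distribution, which is why the LP solution resembles a truncated soliton.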

  • Raptor Codes for any Channel

    From the formula for the fraction of nodes of degree 2 we know that it is not possible to design truly universal Raptor codes.

    However, how does a Raptor code designed for a binary-input memoryless symmetric channel perform on some other channel of this type?

    The question has been answered (joint work with O. Etesami): if a code achieves capacity on the BEC, then its overhead on any binary-input memoryless symmetric channel is not larger than log2(e) - 1 ≈ 0.442.

  • Conclusions

    For LT- and Raptor codes, some decoding algorithms can be phrased directly in terms of subgraphs of the graph induced by output symbols of degree 2. This leads to a simpler analysis without the use of the tree assumption.

    For the BEC, and for the q-ary symmetric channel (large q), we obtain essentially the same limiting capacity-achieving degree distribution, using the giant-component analysis.

    An information-theoretic analysis gives the optimal fraction of output nodes of degree 2 for general memoryless symmetric channels.

    A graph analysis reveals very good degree distributions, which perform very well experimentally.