Center for Nonlinear Studies
Tuesday, October 19, 2010
10:30 AM - 11:30 AM
CNLS Conference Room (TA-3, Bldg 1690)

Seminar

LP Decoding: Thresholds, Connections, and Extensions

Henry Pfister
School of Engineering at Texas A&M University

Applying the max-product (and belief-propagation) algorithms to loopy graphs is now quite popular for constraint satisfaction problems, largely due to their low computational complexity and impressive performance in practice. Still, there is no general understanding of the conditions required for convergence and/or the optimality of converged solutions. This talk presents an analysis of the attenuated max-product (a.k.a. weighted min-sum) algorithm that was applied, by Koetter and Frey, to the decoding of LDPC codes. They showed that this algorithm always converges to a fixed point when the weight $\beta$ is sufficiently small, and that a consistent fixed point must be a maximum-likelihood (ML) solution. Unfortunately, their condition $1/\beta > (d_v - 1)(d_c - 1)$ for $(d_v, d_c)$-regular LDPC codes forces $\beta$ to be too small to be useful. We show that $1/\beta > d_v - 1$ suffices and that a consistent fixed point must be both the linear-programming (LP) and the ML solution. The improvement is interesting because it enables one to use density evolution to identify the phase transition (i.e., the noise threshold) between successful and unsuccessful decoding. A similar phase transition was first identified by Arora et al., and both appear, in some sense, to be newly observed phenomena in the decoding of LDPC codes. A counterexample is also given showing that the min-sum algorithm (without weighting) can converge to a codeword that is not an ML solution. Next, we discuss extensions of LP decoding to the joint decoding problem for channels with memory. This extension allows a rigorous definition of LP joint-decoding pseudo-codewords (JD-PCWs) and enables evaluation of the pairwise error probability between codewords and JD-PCWs. This leads naturally to a provable upper bound on the decoder failure probability and allows maximum-likelihood (ML) certificates in some cases.
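The weighted min-sum iteration described above can be sketched in a few lines. This is a minimal illustration only: the function name, the placement of the weight $\beta$ on the check-to-variable messages, and the flooding schedule are our assumptions for concreteness, not necessarily the exact formulation analyzed in the talk.

```python
import numpy as np

def weighted_min_sum(H, llr, beta=0.5, max_iter=50):
    """Weighted (attenuated) min-sum decoding of a binary LDPC code.

    H    : (m, n) parity-check matrix over {0, 1}
    llr  : length-n channel log-likelihood ratios (positive favors bit 0)
    beta : attenuation weight (the talk's result suggests 1/beta > d_v - 1)
    """
    m, n = H.shape
    # Edge messages indexed by (check, variable); nonzero only where H == 1.
    v2c = np.where(H == 1, llr, 0.0)  # initialize with channel LLRs
    x_hat = (np.asarray(llr) < 0).astype(int)
    for _ in range(max_iter):
        # Check-node update: product of signs times min extrinsic magnitude.
        c2v = np.zeros_like(v2c)
        for c in range(m):
            idx = np.flatnonzero(H[c])
            msgs = v2c[c, idx]
            sgn = np.prod(np.sign(msgs))
            for j, v in enumerate(idx):
                others = np.delete(msgs, j)
                c2v[c, v] = sgn * np.sign(msgs[j]) * np.min(np.abs(others))
        # Variable-node update: channel LLR plus beta-weighted check messages.
        # NOTE: scaling c2v by beta here is one common convention.
        for v in range(n):
            idx = np.flatnonzero(H[:, v])
            total = llr[v] + beta * c2v[idx, v].sum()
            for c in idx:
                v2c[c, v] = total - beta * c2v[c, v]  # exclude own message
        # Tentative hard decision; stop once all parity checks are satisfied.
        marg = llr + beta * (c2v * H).sum(axis=0)
        x_hat = (marg < 0).astype(int)
        if not (H @ x_hat % 2).any():
            break
    return x_hat
```

For example, on the (7,4) Hamming code with the all-zero codeword sent and one corrupted LLR, the decoder recovers the all-zero word in a single iteration.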
Finally, an iterative algorithm is introduced to solve the joint LP decoding problem. The dual-domain structure of the problem allows one to identify a low-complexity solution that takes advantage of the sparsity of the parity-check matrix. The main advantage of this decoder is that it appears to provide the predictability of joint LP decoding with the computational complexity of turbo equalization.
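For context, the LP decoder referred to throughout is, in the memoryless setting, Feldman's relaxation of ML decoding over the fundamental polytope; a sketch of that primal program (the notation $\gamma_i$ for the channel log-likelihood ratios and $N(j)$ for the neighborhood of check $j$ is ours):

```latex
\begin{align*}
\min_{x}\;& \sum_{i} \gamma_i x_i,
  \qquad \gamma_i = \log\frac{\Pr(y_i \mid x_i = 0)}{\Pr(y_i \mid x_i = 1)},\\
\text{s.t.}\;& \sum_{i \in S} x_i \;-\! \sum_{i \in N(j)\setminus S} x_i \le |S| - 1
  \quad \text{for every check } j \text{ and every odd-size } S \subseteq N(j),\\
& 0 \le x_i \le 1.
\end{align*}
```

An integral optimum of this LP is an ML codeword (the ML certificate mentioned above); fractional vertices are the pseudo-codewords whose joint-decoding analogues (JD-PCWs) the talk defines for channels with memory.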

Host: Misha Chertkov (chertkov@lanl.gov)