Proportionate-type Normalized Least Mean Square Algorithms
FOCUS SERIES
Series Editor Francis Castanié
Proportionate-type
Normalized Least Mean
Square Algorithms
Kevin Wagner
Miloš Doroslovački
First published 2013 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.

Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:

ISTE Ltd                        John Wiley & Sons, Inc.
27-37 St George's Road          111 River Street
London SW19 4EU                 Hoboken, NJ 07030
UK                              USA
www.iste.co.uk                  www.wiley.com

© ISTE Ltd 2013

The rights of Kevin Wagner and Miloš Doroslovački to be identified as the authors of this work have been asserted by them in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Control Number: 2013937864

British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library

ISSN: 2051-2481 (Print)
ISSN: 2051-249X (Online)
ISBN: 978-1-84821-470-5

Printed and bound in Great Britain by CPI Group (UK) Ltd., Croydon, Surrey CR0 4YY
Contents
PREFACE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
NOTATION . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
ACRONYMS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
CHAPTER 1. INTRODUCTION TO PTNLMS ALGORITHMS . . . . . . . . . . 1
1.1. Applications motivating PtNLMS algorithms . . . . . . . . . . 1
1.2. Historical review of existing PtNLMS algorithms . . . . . . . . . . 4
1.3. Unified framework for representing PtNLMS algorithms . . . . . . . . . . 6
1.4. Proportionate-type NLMS adaptive filtering algorithms . . . . . . . . . . 8
1.4.1. Proportionate-type least mean square algorithm . . . . . . . . . . 8
1.4.2. PNLMS algorithm . . . . . . . . . . 8
1.4.3. PNLMS++ algorithm . . . . . . . . . . 8
1.4.4. IPNLMS algorithm . . . . . . . . . . 9
1.4.5. IIPNLMS algorithm . . . . . . . . . . 10
1.4.6. IAF-PNLMS algorithm . . . . . . . . . . 10
1.4.7. MPNLMS algorithm . . . . . . . . . . 11
1.4.8. EPNLMS algorithm . . . . . . . . . . 11
1.5. Summary . . . . . . . . . . 12
CHAPTER 2. LMS ANALYSIS TECHNIQUES . . . . . . . . . . 13
2.1. LMS analysis based on small adaptation step-size . . . . . . . . . . 13
2.1.1. Statistical LMS theory: small step-size assumptions . . . . . . . . . . 13
2.1.2. LMS analysis using stochastic difference equations with constant coefficients . . . . . . . . . . 14
2.2. LMS analysis based on independent input signal assumptions . . . . . . . . . . 18
2.2.1. Statistical LMS theory: independent input signal assumptions . . . . . . . . . . 18
2.2.2. LMS analysis using stochastic difference equations with stochastic coefficients . . . . . . . . . . 19
2.3. Performance of statistical LMS theory . . . . . . . . . . 24
2.4. Summary . . . . . . . . . . 27
CHAPTER 3. PTNLMS ANALYSIS TECHNIQUES . . . . . . . . . . 29
3.1. Transient analysis of PtNLMS algorithm for white input . . . . . . . . . . 29
3.1.1. Link between MSWD and MSE . . . . . . . . . . 30
3.1.2. Recursive calculation of the MWD and MSWD for PtNLMS algorithms . . . . . . . . . . 30
3.2. Steady-state analysis of PtNLMS algorithm: bias and MSWD calculation . . . . . . . . . . 33
3.3. Convergence analysis of the simplified PNLMS algorithm . . . . . . . . . . 37
3.3.1. Transient theory and results . . . . . . . . . . 37
3.3.2. Steady-state theory and results . . . . . . . . . . 46
3.4. Convergence analysis of the PNLMS algorithm . . . . . . . . . . 47
3.4.1. Transient theory and results . . . . . . . . . . 48
3.4.2. Steady-state theory and results . . . . . . . . . . 53
3.5. Summary . . . . . . . . . . 54
CHAPTER 4. ALGORITHMS DESIGNED BASED ON MINIMIZATION OF USER-DEFINED CRITERIA . . . . . . . . . . 57
4.1. PtNLMS algorithms with gain allocation motivated by MSE minimization for white input . . . . . . . . . . 57
4.1.1. Optimal gain calculation resulting from MMSE . . . . . . . . . . 58
4.1.2. Water-filling algorithm simplifications . . . . . . . . . . 62
4.1.3. Implementation of algorithms . . . . . . . . . . 63
4.1.4. Simulation results . . . . . . . . . . 65
4.2. PtNLMS algorithm obtained by minimization of MSE modeled by exponential functions . . . . . . . . . . 68
4.2.1. WD for proportionate-type steepest descent algorithm . . . . . . . . . . 69
4.2.2. Water-filling gain allocation for minimization of the MSE modeled by exponential functions . . . . . . . . . . 69
4.2.3. Simulation results . . . . . . . . . . 73
4.3. PtNLMS algorithm obtained by minimization of the MSWD for colored input . . . . . . . . . . 76
4.3.1. Optimal gain algorithm . . . . . . . . . . 76
4.3.2. Relationship between minimization of MSE and MSWD . . . . . . . . . . 81
4.3.3. Simulation results . . . . . . . . . . 82
4.4. Reduced computational complexity suboptimal gain allocation for PtNLMS algorithm with colored input . . . . . . . . . . 83
4.4.1. Suboptimal gain allocation algorithms . . . . . . . . . . 84
4.4.2. Simulation results . . . . . . . . . . 85
4.5. Summary . . . . . . . . . . 88
CHAPTER 5. PROBABILITY DENSITY OF WD FOR PTLMS ALGORITHMS . . . . . . . . . . 91
5.1. Proportionate-type least mean square algorithms . . . . . . . . . . 91
5.1.1. Weight deviation recursion . . . . . . . . . . 91
5.2. Derivation of the conditional PDF for the PtLMS algorithm . . . . . . . . . . 92
5.2.1. Conditional PDF derivation . . . . . . . . . . 92
5.3. Applications using the conditional PDF . . . . . . . . . . 100
5.3.1. Methodology for finding the steady-state joint PDF using the conditional PDF . . . . . . . . . . 101
5.3.2. Algorithm based on constrained maximization of the conditional PDF . . . . . . . . . . 104
5.4. Summary . . . . . . . . . . 111
CHAPTER 6. ADAPTIVE STEP-SIZE PTNLMS ALGORITHMS . . . . . . . . . . 113
6.1. Adaptation of µ-law for compression of weight estimates using the output square error . . . . . . . . . . 113
6.2. AMPNLMS and AEPNLMS simplification . . . . . . . . . . 114
6.3. Algorithm performance results . . . . . . . . . . 116
6.3.1. Learning curve performance of the ASPNLMS, AMPNLMS and AEPNLMS algorithms for a white input signal . . . . . . . . . . 116
6.3.2. Learning curve performance of the ASPNLMS, AMPNLMS and AEPNLMS algorithms for a color input signal . . . . . . . . . . 117
6.3.3. Learning curve performance of the ASPNLMS, AMPNLMS and AEPNLMS algorithms for a voice input signal . . . . . . . . . . 117
6.3.4. Parameter effects on algorithms . . . . . . . . . . 119
6.4. Summary . . . . . . . . . . 124
CHAPTER 7. COMPLEX PTNLMS ALGORITHMS . . . . . . . . . . 125
7.1. Complex adaptive filter framework . . . . . . . . . . 126
7.2. cPtNLMS and cPtAP algorithm derivation . . . . . . . . . . 126
7.2.1. Algorithm simplifications . . . . . . . . . . 129
7.2.2. Alternative representations . . . . . . . . . . 131
7.2.3. Stability considerations of the cPtNLMS algorithm . . . . . . . . . . 131
7.2.4. Calculation of step-size control matrix . . . . . . . . . . 132
7.3. Complex water-filling gain allocation algorithm for white input signals: one gain per coefficient case . . . . . . . . . . 133
7.3.1. Derivation . . . . . . . . . . 133
7.3.2. Implementation . . . . . . . . . . 136
7.4. Complex colored water-filling gain allocation algorithm: one gain per coefficient case . . . . . . . . . . 136
7.4.1. Problem statement and assumptions . . . . . . . . . . 136
7.4.2. Optimal gain allocation resulting from minimization of MSWD . . . . . . . . . . 137
7.4.3. Implementation . . . . . . . . . . 138
7.5. Simulation results . . . . . . . . . . 139
7.5.1. cPtNLMS algorithm simulation results . . . . . . . . . . 139
7.5.2. cPtAP algorithm simulation results . . . . . . . . . . 141
7.6. Transform domain PtNLMS algorithms . . . . . . . . . . 144
7.6.1. Derivation . . . . . . . . . . 145
7.6.2. Implementation . . . . . . . . . . 146
7.6.3. Simulation results . . . . . . . . . . 147
7.7. Summary . . . . . . . . . . 151
CHAPTER 8. COMPUTATIONAL COMPLEXITY FOR PTNLMS ALGORITHMS . . . . . . . . . . 153
8.1. LMS computational complexity . . . . . . . . . . 153
8.2. NLMS computational complexity . . . . . . . . . . 154
8.3. PtNLMS computational complexity . . . . . . . . . . 154
8.4. Computational complexity for specific PtNLMS algorithms . . . . . . . . . . 155
8.5. Summary . . . . . . . . . . 157
CONCLUSION . . . . . . . . . . 159
APPENDIX 1. CALCULATION OF β_i^(0), β_{i,j}^(1) AND β_i^(2) . . . . . . . . . . 161
APPENDIX 2. IMPULSE RESPONSE LEGEND . . . . . . . . . . 167
BIBLIOGRAPHY . . . . . . . . . . 169
INDEX . . . . . . . . . . 173
Preface
Aims of this book

The primary goal of this book is to impart additional capabilities and tools to the field of adaptive filtering. A large part of this book deals with the operation of adaptive filters when the unknown impulse response is sparse. A sparse impulse response is one in which only a few coefficients contain the majority of energy. In this case, the algorithm designer attempts to use the a priori knowledge of sparsity. Proportionate-type normalized least mean square (PtNLMS) algorithms attempt to leverage this knowledge of sparsity. However, an ideal algorithm would be robust and could provide superior channel estimation in both sparse and non-sparse (dispersive) channels. In addition, it would be preferable for the algorithm to work in both stationary and non-stationary environments. Taking all these factors into consideration, this book attempts to add to the state of the art in PtNLMS algorithm functionality for all these diverse conditions.
Organization of this book
Chapter 1 introduces the framework of the PtNLMS algorithm. A review of prior work performed in the field of adaptive filtering is presented.
Chapter 2 describes classic techniques used to analyze the steady-state and transient regimes of the least mean square (LMS) algorithm.
In Chapter 3, a general methodology is presented for the steady-state and transient analysis of an arbitrary PtNLMS algorithm for white input signals. This chapter builds on the previous chapter and examines the usability and limitations of assuming that the weight deviations are Gaussian.
In Chapter 4, several new algorithms are discussed which attempt to choose a gain at any time instant that will minimize user-defined criteria, such as mean square output error and mean square weight deviation. The solution to this optimization problem
results in a water-filling algorithm. The algorithms described are then tested in a wide variety of input as well as impulse response scenarios.
In Chapter 5, an analytic expression for the conditional probability density function of the weight deviations, given the preceding weight deviations, is derived. This joint conditional probability density function is then used to derive the steady-state joint probability density function for the weight deviations under different gain allocation laws.
In Chapter 6, a modification of the µ-law PNLMS algorithm is introduced. Motivated by minimizing the mean square error (MSE) at all times, the adaptive step-size algorithms described in this chapter are shown to exhibit robust convergence properties.
In Chapter 7, the PtNLMS algorithm is extended from real-valued signals to complex-valued signals. In addition, several simplifications of the complex PtNLMS algorithm and their implementations are proposed. Finally, complex water-filling algorithms are derived.
In Chapter 8, the computational complexities of the algorithms introduced in this book are compared to those of classic algorithms such as the normalized least mean square (NLMS) and proportionate normalized least mean square (PNLMS) algorithms.
Notation
The following notation is used throughout this book. Vectors are denoted by boldface lowercase letters, such as x. All vectors are column vectors unless explicitly stated otherwise. Scalars are denoted by Roman or Greek letters, such as x or ν. The ith component of vector x is given by x_i. Matrices are denoted by boldface capital letters, such as A. The (i,j)th entry of any matrix A is denoted as [A]_ij ≡ a_ij. We frequently encounter time-varying vectors in this book. A vector at time k is given by x(k). For notational convenience, this time indexing is often suppressed so that the notation x implies x(k). Additionally, we use the definitions x+ ≡ x(k+1) and x− ≡ x(k−1) to represent the vector x at times k+1 and k−1, respectively.
For vector a with length L, we define the function Diag{a} as an L×L matrix whose diagonal entries are the L elements of a and all other entries are zero. For matrix A, we define the function diag{A} as a column vector containing the L diagonal entries from A. For matrices, Re{A} and Im{A} represent the real and imaginary parts of the complex matrix A.
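The Diag{·} and diag{·} operators map directly onto standard linear algebra routines. As a minimal illustrative sketch (not from the book; the function names mirror the notation above), the two conventions could be expressed in NumPy as follows:

```python
import numpy as np

def Diag(a):
    # Diag{a}: an L x L matrix whose diagonal entries are the L
    # elements of vector a and all other entries are zero.
    return np.diag(np.asarray(a).ravel())

def diag(A):
    # diag{A}: a column vector containing the L diagonal entries of A.
    return np.diag(A).reshape(-1, 1)

a = np.array([1.0, 2.0, 3.0])
D = Diag(a)   # 3 x 3 diagonal matrix built from a
v = diag(D)   # 3 x 1 column vector recovering the elements of a
```

Note that diag{Diag{a}} returns the elements of a, as the example shows; the two operators are inverses in that direction.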
The list of notation is given below.

x                a vector
x                a scalar
A                a matrix
x_i              the ith entry of vector x
[A]_ij ≡ a_ij    the (i,j)th entry of any matrix A
Diag{a}          a diagonal matrix whose diagonal entries are the elements of vector a
diag{A}          a column vector whose entries are the diagonal elements of matrix A
I                identity matrix
E{x}             expected value of random vector x