COM 5220 - Adaptive Signal Processing - Homework 5

Implement an adaptive interference cancellation system with the least mean squares (LMS) algorithm described in Chapter 4, the normalized LMS (NLMS) algorithm described in Chapter 5, and the variable-step LMS (VS-LMS) algorithm described in Chapter 5. The adaptive system is used to cancel interference, i(n), contained in a primary signal d(n), as depicted in Fig. 1. The primary signal serves as the desired signal for the adaptive system. The input signal is a filtered version of i(n), i.e., x(n) = i(n) * h(n), where * denotes the convolution operator.

 

◼ Figure 1 Adaptive interference canceling.

(1)  For the LMS algorithm, the update equation is

fn+1 = +fn e n( )xn

      where  is the step size, e n( ) = d n( ) − y n( ) , and y n( ) =f xn nT  with

$f_n = [\, f_n(0) \;\; f_n(1) \;\; \cdots \;\; f_n(L-1) \,]^T$, $\quad x_n = [\, x_n(0) \;\; x_n(1) \;\; \cdots \;\; x_n(L-1) \,]^T$,

and L being the adaptive filter length.
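As a rough illustrative sketch (not the required implementation), one LMS iteration could look as follows in Python/NumPy. The names lms_update, x_vec, and mu are placeholders, and the tap-input vector is assumed to hold $x_n(k) = x(n-k)$ for $k = 0, \ldots, L-1$, consistent with the $e(n)x(n-i)$ terms used later for the VS-LMS rule.

import numpy as np

def lms_update(f, x_vec, d, mu):
    # f     : current coefficient vector f_n (length L)
    # x_vec : tap-input vector x_n = [x(n), x(n-1), ..., x(n-L+1)] (assumed ordering)
    # d     : primary (desired) sample d(n)
    # mu    : step size
    y = f @ x_vec                    # filter output y(n) = f_n^T x_n
    e = d - y                        # error e(n) = d(n) - y(n)
    f_next = f + mu * e * x_vec      # LMS update f_{n+1} = f_n + mu e(n) x_n
    return f_next, e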

(2)  For the NLMS algorithm, the update equation is

$f_{n+1} = f_n + \dfrac{\mu}{c + x_n^T x_n}\, e(n)\, x_n$

where c is a small positive constant.
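A corresponding sketch of one NLMS iteration, again in Python/NumPy and using the same illustrative variable names as in the LMS sketch above:

import numpy as np

def nlms_update(f, x_vec, d, mu, c=1e-3):
    # Step size is normalized by c + x_n^T x_n, with c a small positive constant.
    y = f @ x_vec
    e = d - y
    f_next = f + (mu / (c + x_vec @ x_vec)) * e * x_vec
    return f_next, e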

(3)  For the VS-LMS algorithm, the update equation is

$f_{n+1} = f_n + e(n)\, M_n\, x_n$

where $M_n$ is an $L \times L$ diagonal matrix with

$M_n = \mathrm{diag}\{\mu_0(n), \mu_1(n), \ldots, \mu_{L-1}(n)\}$.

The i ( )n ’s are adjusted according to the following rule:

⚫  For $n = 0$, $\mu_i(n) = \mu_{\max}$, $i = 0, 1, \ldots, L-1$.

² For n0 ,  i ( )n = i( )n c1 (with c1 1) if e n x n( ) ( −i) has N1 successive sign changes and  i( )n = i( )n c2 (with c2 1) if e n x n( ) ( −i) has no sign changes for N2 successive updates, where   min  i( )n  max .

System Specifications: 

⚫  Assume $f_0 = 0$.

⚫  The information signal $s(n) = \pm 1$ with $P\{s(n) = +1\} = 0.5$ and $P\{s(n) = -1\} = 0.5$.

⚫  $i(n)$ is generated from a uniformly distributed random variable with range $[-1, 1]$.

⚫  $h(n)$ is a five-point impulse response, i.e., $h(0) = 0.227$, $h(1) = 0.46$, $h(2) = 0.688$, $h(3) = 0.46$, and $h(4) = 0.227$.

⚫  The adaptive filter length is $L = 6$.
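A sketch of one trial's test-data generation under these specifications (Python/NumPy). Note that $d(n) = s(n) + i(n)$ is an assumption here: it is implied by the BER being measured between $\mathrm{sgn}(e(n))$ and $s(n)$, but not stated explicitly above.

import numpy as np

rng = np.random.default_rng()
N, L = 12000, 6

s = rng.choice([-1.0, 1.0], size=N)            # information signal, P{+1} = P{-1} = 0.5
i_sig = rng.uniform(-1.0, 1.0, size=N)         # interference i(n), uniform on [-1, 1]
h = np.array([0.227, 0.46, 0.688, 0.46, 0.227])
x = np.convolve(i_sig, h)[:N]                  # input x(n) = i(n) * h(n), truncated to N samples
d = s + i_sig                                  # primary signal (assumed d(n) = s(n) + i(n))
f = np.zeros(L)                                # initial coefficients f_0 = 0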

Parameter Settings: 

⚫  For the LMS algorithm, the step size $\mu$ is to be chosen by you.

⚫  For the NLMS algorithm, $\mu$ is to be chosen by you and $c = 10^{-3}$.

⚫  For the VS-LMS algorithm, $c_1 = 0.9$, $c_2 = 1.1$, and $N_1 = N_2 = 3$. Also, $\mu_{\max}$ and $\mu_{\min}$ are to be chosen by you.

Simulation Assignments: 

Generate 12,000 samples of each test signal, i.e., $s(n)$, $i(n)$, and $x(n)$, in each trial. The performance of each algorithm is examined in terms of the bit error rate (BER) and the average squared error over different averaging intervals; a sketch of the per-trial evaluation is given after assignment (4) below.

(1)  Plot the learning curve, i.e., $|e(n)|^2$ vs. the number of iterations, for each algorithm. Note that the learning curve is obtained by averaging the results over 100 trials.

(2)  Calculate BERs 1 and 2 of the adaptive system for the different adaptive algorithms, where BER 1 is evaluated from iterations 101 to 12000 and BER 2 from iterations 1001 to 12000 in each trial. The final BER for each case is obtained by averaging the results over 100 trials.

$P_{e1}(i) = \dfrac{1}{12000 - 100} \sum_{n=101}^{12000} 1\{\mathrm{sgn}(e_i(n)) \neq s_i(n)\}, \quad i = 1, 2, \ldots, 100$

$P_{e2}(i) = \dfrac{1}{12000 - 1000} \sum_{n=1001}^{12000} 1\{\mathrm{sgn}(e_i(n)) \neq s_i(n)\}, \quad i = 1, 2, \ldots, 100$

$P_{e1} = \dfrac{1}{100} \sum_{i=1}^{100} P_{e1}(i); \qquad P_{e2} = \dfrac{1}{100} \sum_{i=1}^{100} P_{e2}(i)$

(3)  Calculate the average squared error of the adaptive system for the different adaptive algorithms. That is, calculate $(1/M)\sum_n [s_i(n) - e_i(n)]^2$ in the $i$th trial, where $M$ is the length of the averaging interval. The squared errors are averaged from iterations 101 to 12000 and from iterations 1001 to 12000, respectively, in each trial. The final average squared error for each case is obtained by averaging the results over 100 trials.

$e_1(i) = \dfrac{1}{12000 - 100} \sum_{n=101}^{12000} [s_i(n) - e_i(n)]^2, \quad i = 1, 2, \ldots, 100$

$e_2(i) = \dfrac{1}{12000 - 1000} \sum_{n=1001}^{12000} [s_i(n) - e_i(n)]^2, \quad i = 1, 2, \ldots, 100$

$e_1 = \dfrac{1}{100} \sum_{i=1}^{100} e_1(i); \qquad e_2 = \dfrac{1}{100} \sum_{i=1}^{100} e_2(i)$

(4)  What conclusions can you draw about the relative performance of the three algorithms from your simulation results?
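The per-trial evaluation referred to above could be organized as in the following sketch (Python/NumPy). Here run_trial is a hypothetical helper that generates one trial's signals, runs the chosen algorithm, and returns the error sequence e and the information sequence s; the slicing assumes iteration 1 corresponds to index 0.

import numpy as np

def trial_metrics(e, s, start):
    # BER and average squared error from iteration start+1 through 12000
    # (0-based slicing, so start=100 covers iterations 101..12000).
    seg_e, seg_s = e[start:], s[start:]
    ber = np.mean(np.sign(seg_e) != seg_s)
    mse = np.mean((seg_s - seg_e) ** 2)
    return ber, mse

# Example accumulation over 100 trials (run_trial is hypothetical):
# n_trials, N = 100, 12000
# curve = np.zeros(N)
# ber1 = ber2 = mse1 = mse2 = 0.0
# for _ in range(n_trials):
#     e, s = run_trial()
#     curve += np.abs(e) ** 2 / n_trials          # learning curve |e(n)|^2
#     b1, m1 = trial_metrics(e, s, 100)           # iterations 101..12000
#     b2, m2 = trial_metrics(e, s, 1000)          # iterations 1001..12000
#     ber1 += b1 / n_trials; ber2 += b2 / n_trials
#     mse1 += m1 / n_trials; mse2 += m2 / n_trials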

