ICSI 401 (Numerical Methods) – Homework 4: Partial pivoting

4.1       Partial pivoting
In this problem, you will review partial pivoting and the reasoning behind it.

1.    Circle the best explanation for the use of partial pivoting from the choices below.

(a)    Partial pivoting is used in Gaussian elimination to mitigate the effects of rounding errors in the matrix entries, resulting in an algorithm that is usually backward stable.

(b)    Partial pivoting is not used in Gaussian elimination at all, but is instead used to solve linear systems via Cramer’s rule.

(c)    Partial pivoting is used to transform the input to Gaussian elimination so as to speed up the solution of linear systems.

2.    Show the steps of Gaussian elimination with partial pivoting for the following matrix:

10−5

2



−1
1

−3

1
 
(4.1)
Make sure that you clearly indicate in each step what operation you are performing (e.g., which rows are swapped, which multiples of rows are added to which other rows) and the result of that operation. You should end up with an upper triangular matrix.
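The problem asks you to show the elimination by hand (and the course's code is Matlab), but the mechanics can be sketched in a few lines of Python. The function and the 2×3 augmented matrix below are illustrative only, not the required solution:

```python
# Sketch of Gaussian elimination with partial pivoting, for checking
# your hand computation. The sample augmented matrix is illustrative.

def forward_eliminate(M):
    """Reduce augmented matrix M (list of row lists) to upper triangular
    form, swapping in the largest-magnitude pivot from each column first."""
    n = len(M)  # number of rows
    for k in range(n):
        # Partial pivoting: choose the row with the largest |entry| in column k.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        if p != k:
            M[k], M[p] = M[p], M[k]          # row swap
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]            # multiplier for row i
            M[i] = [a - m * b for a, b in zip(M[i], M[k])]
    return M

# Illustrative augmented matrix [A | b] with a tiny pivot in row 1,
# the situation partial pivoting is designed to handle.
M = [[1e-5, -1.0, -3.0],
     [2.0,   1.0,  1.0]]
U = forward_eliminate(M)
```

Without the row swap, dividing by the tiny pivot 10^-5 would produce a huge multiplier and amplify rounding error; the swap keeps every multiplier at most 1 in magnitude.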

4.2        Least squares and curve fitting
This problem will give you practice in using least squares to fit polynomials to data. Consider the following dataset in Matlab, consisting of pairs of points (x,y):

4-1

4-2                                                   Homework 4: ICSI 401 – Numerical Methods

 

x = [-3, -2, -1, 0, 1, 2, 3]';
y = [-12.529999999999999, -3.02, 0.49, 1.00, 1.51, 5.02, 14.529999999999999]';

Suppose that you want to find a least squares fit of a degree-3 polynomial to this data. I.e., you want to determine coefficients c0, c1, c2, c3 such that c3 x^3 + c2 x^2 + c1 x + c0 is a least-squares fit to the data. We will collect the coefficients into a vector ~c∗ = [c0, c1, c2, c3]^T.

1.    Using Matlab, compute the Vandermonde matrix A associated with this problem and explain how you got it. Remember that least squares requires us to solve

                                                   ~c∗ = argmin_~c ||A~c − ~y||_2.                                               (4.2)

Hint: In Matlab, if you have some number of column vectors a,b,c... of the same length, then you can put them into a matrix as follows:

A = [a , b , c];

Furthermore, you can generate a column vector of all ones with a given length l by writing

ones(l, 1)

Finally, given a vector x, you can raise each of its elements to a given power a by writing x.^a
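Putting the hints together, the columns of A are 1, x, x.^2, x.^3. The same construction in a quick Python sketch (the assignment itself expects Matlab; this is only for checking your result):

```python
# Build the degree-3 Vandermonde matrix whose columns are 1, x, x^2, x^3,
# mirroring the Matlab construction A = [ones(length(x),1), x, x.^2, x.^3].
x = [-3, -2, -1, 0, 1, 2, 3]
A = [[xi**p for p in range(4)] for xi in x]   # row i is [1, xi, xi^2, xi^3]
```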

2.    In Matlab, use the A that you computed and the normal equations approach to the least squares problem to compute the vector of coefficients ~c∗.

3.    In Matlab, use the norm function to compute the error ||A~c∗ − ~y||_2.

For this problem, you are expected to turn in all Matlab code that you’ve written, and clearly indicate your final answers.
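In Matlab the normal-equations route is essentially c = (A'*A) \ (A'*y) followed by norm(A*c - y). A stand-alone Python sketch of the same computation, useful for checking your numbers (the helper elimination code is illustrative, not part of the required answer):

```python
# Normal equations: solve (A^T A) c = A^T y, then measure ||A c - y||_2.
import math

x = [-3, -2, -1, 0, 1, 2, 3]
y = [-12.529999999999999, -3.02, 0.49, 1.00, 1.51, 5.02, 14.529999999999999]

A = [[xi**p for p in range(4)] for xi in x]   # Vandermonde: columns 1, x, x^2, x^3

# Form G = A^T A (4x4) and b = A^T y (length 4).
G = [[sum(A[i][r] * A[i][c] for i in range(len(A))) for c in range(4)]
     for r in range(4)]
b = [sum(A[i][r] * y[i] for i in range(len(A))) for r in range(4)]

# Solve G c = b by Gaussian elimination with partial pivoting.
M = [row + [bi] for row, bi in zip(G, b)]     # augmented matrix [G | b]
n = 4
for k in range(n):
    p = max(range(k, n), key=lambda i: abs(M[i][k]))
    M[k], M[p] = M[p], M[k]
    for i in range(k + 1, n):
        m = M[i][k] / M[k][k]
        M[i] = [a - m * g for a, g in zip(M[i], M[k])]
c = [0.0] * n
for k in range(n - 1, -1, -1):                # back substitution
    c[k] = (M[k][n] - sum(M[k][j] * c[j] for j in range(k + 1, n))) / M[k][k]

# Residual ||A c - y||_2, i.e. Matlab's norm(A*c - y).
r = [sum(A[i][j] * c[j] for j in range(4)) - y[i] for i in range(len(A))]
err = math.sqrt(sum(ri * ri for ri in r))
```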

4.3          Eigenthings and the power method
1. In Matlab, use the power method to calculate unit eigenvectors corresponding to the two dominant eigenvalues of the matrix

 6

2



−1
2 5 1
 
(4.3)
In particular, you should run the power method for 50 iterations.

As usual, include your Matlab code and output. Clearly indicate the two eigenvectors that are your final answer.

Hint: Think about how you might check your answer to make sure it’s accurate. You know that an eigenvector v must satisfy Av = λv for some λ, so one idea is to compute w = Av and make sure that w1/v1 ≈ w2/v2 ≈ w3/v3 (equality will only be approximate in floating point).
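As a sketch of the iteration (the assignment expects Matlab), here is the power method in plain Python. The symmetric 3×3 matrix in it is made up for demonstration and is not the matrix in (4.3), and the deflation step used for the second eigenpair is one possible approach, valid for symmetric matrices:

```python
# Power-method sketch. The sample matrix is an illustrative symmetric 3x3,
# not the homework matrix.
import math

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def normalize(v):
    s = math.sqrt(sum(x * x for x in v))
    return [x / s for x in v]

def power_method(A, v, iters=50):
    """Repeatedly apply A and renormalize; v converges to the dominant unit
    eigenvector when the dominant eigenvalue is simple."""
    for _ in range(iters):
        v = normalize(matvec(A, v))
    lam = sum(w * x for w, x in zip(matvec(A, v), v))   # Rayleigh quotient
    return lam, v

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]

lam1, v1 = power_method(A, [1.0, 1.0, 1.0])

# One way to reach the *second* eigenpair of a symmetric matrix: deflate by
# subtracting lam1 * v1 v1^T, then run the power method on the result.
B = [[A[i][j] - lam1 * v1[i] * v1[j] for j in range(3)] for i in range(3)]
lam2, v2 = power_method(B, [1.0, 0.0, 0.0])

# The hint's check: w = A v should be (nearly) a scalar multiple of v.
w = matvec(A, v1)
ratios = [wi / vi for wi, vi in zip(w, v1)]   # all approximately lam1
```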

4.4         Polynomial interpolation, Chebyshev points
1.    In general, if we are given points (x0,y0),...,(xn,yn), what is the minimum number d such that a unique polynomial with degree ≤ d and passing through these points is guaranteed to exist?

2.    Let x0 = −1,x1 = 1,x2 = 2, and let y0 = 1,y1 = 2,y2 = 3. Write down the Lagrange interpolating polynomial that passes through the points (x0,y0),(x1,y1),(x2,y2).

3.    For the points in the previous problem, fill in the following divided difference table:

f[x0]
          f[x0,x1]
f[x1]                  f[x0,x1,x2]
          f[x1,x2]
f[x2]
That is, you are expected to compute f[x0], f[x1], f[x0,x1], and so on.

Hint: Remember that the point of divided differences is that they can be computed recursively. See the textbook for the definition and more information.

4.    Using the divided difference table above, write down the Newton interpolating polynomial for these points.

Hint: Your answer should be of the form a0 + a1(x − x0) + a2(x − x0)(x − x1). So what you need to do is calculate a0,a1,a2 from the divided differences table.

Also, note that the polynomial that you get from Newton interpolation should be the same as the polynomial that you get from Lagrange interpolation. So you can check your answer by plugging in some points into each polynomial and making sure that you get the same result from each.
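The recursion behind the divided-difference table, the Newton form built from its top edge, and the Lagrange cross-check can all be sketched in Python. The points in the sketch are made up for illustration, not the homework's points:

```python
# Divided differences computed recursively, plus Newton and Lagrange
# evaluation of the interpolating polynomial. Sample points are illustrative.

def divided_differences(xs, ys):
    """table[k][i] = f[x_i, ..., x_{i+k}]; table[0] is just the y values."""
    n = len(xs)
    table = [ys[:]]
    for k in range(1, n):
        prev = table[-1]
        table.append([(prev[i + 1] - prev[i]) / (xs[i + k] - xs[i])
                      for i in range(n - k)])
    return table

def newton_eval(xs, table, x):
    """Evaluate a0 + a1 (x-x0) + a2 (x-x0)(x-x1) + ... with a_k = table[k][0]."""
    total, prod = 0.0, 1.0
    for k in range(len(xs)):
        total += table[k][0] * prod
        prod *= (x - xs[k])
    return total

def lagrange_eval(xs, ys, x):
    """Classic Lagrange form, used only to cross-check the Newton form."""
    total = 0.0
    for i in range(len(xs)):
        li = 1.0
        for j in range(len(xs)):
            if j != i:
                li *= (x - xs[j]) / (xs[i] - xs[j])
        total += ys[i] * li
    return total

xs, ys = [0.0, 1.0, 3.0], [1.0, 3.0, 2.0]     # illustrative points
table = divided_differences(xs, ys)
# Newton and Lagrange give the same polynomial, so they agree at every x.
```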

4.5       Piecewise polynomial interpolation
1. Do Exercise 8.8.9 in the book (on piecewise polynomial interpolation).

Hint: The constraints that P(x) and P′(x) be continuous at x = 1 mean that we must have P1(1) = P2(1) and P1′(1) = P2′(1). This is because, intuitively, continuous functions are those functions that don’t have any gaps or jumps.
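These two matching conditions can be checked numerically. The quadratic pieces below are made up to satisfy them (Exercise 8.8.9 has its own specific pieces), and the central-difference helper exists only for this sketch:

```python
# Checking continuity of a piecewise polynomial P and its derivative P'
# at the breakpoint x = 1. The two pieces are illustrative quadratics.

def p1(x):
    return x * x                                    # piece to the left of x = 1

def p2(x):
    return 2.0 * x - 1.0 + 3.0 * (x - 1.0) ** 2     # piece to the right of x = 1

def deriv(f, x, h=1e-6):
    """Central-difference approximation of f'(x), good enough for a check."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

# Value continuity:      p1(1) == p2(1)   (both equal 1 here)
# Derivative continuity: p1'(1) == p2'(1) (both equal 2 here)
```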
