MTH 373/573 Homework 3

Problem 1: Newton’s method in 1 dimension                                    
(a)    Newton’s method for solving a scalar nonlinear equation f(x) = 0 requires computation of the derivative of f at each iteration. Suppose that we instead replace the true derivative with a constant value d, that is, we use the iteration scheme:

π‘₯π‘˜+1 = π‘₯π‘˜ − 𝑓 (π‘₯π‘˜)/𝑑.

i)    Under what conditions on the value of d will this scheme be locally convergent?

ii)   What will be the convergence rate in general?

iii)  Is there any value of d that would still yield quadratic convergence?
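
A small numerical experiment can help you check your answers to (i)–(iii). The sketch below is purely illustrative: the test function f(x) = x^2 − 2, the starting point, and the values of d are arbitrary choices of mine, not part of the assignment.

    import numpy as np

    def fixed_slope(f, d, x0, root, nsteps=10):
        # Iterate x_{k+1} = x_k - f(x_k)/d and print successive error ratios.
        x = x0
        for k in range(nsteps):
            e_old = abs(x - root)
            x = x - f(x) / d
            e_new = abs(x - root)
            if e_old > 0:
                print(f"  step {k}: |e_new|/|e_old| = {e_new / e_old:.3e}")
        return x

    f = lambda x: x ** 2 - 2.0            # root at sqrt(2), where f'(x*) = 2*sqrt(2)
    for d in [1.0, 2.0, 2.0 * np.sqrt(2.0), 10.0]:
        print(f"d = {d}")
        fixed_slope(f, d, x0=1.5, root=np.sqrt(2.0))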

(b)   Write a Python function to implement Newton’s method in 1d. Your function should take as inputs the 1d function, its derivative, an initial guess and a tolerance for checking convergence. Use this function to find the roots of the following functions using Newton’s method with the provided initial guess.

i)    x^2 − 1 = 0, x_0 = 10^6.  ii)   (x − 1)^4 = 0, x_0 = 10.  iii)  x − cos x = 0, x_0 = 1.

What is the observed convergence rate of Newton’s method for each problem? Why does each problem exhibit this convergence rate?

You can compute the rate by using the fact that the error at each iteration, e_k = x* − x_k, satisfies |e_{k+1}| / |e_k|^r = C. Thus, |e_{k+1}| / |e_k|^r = |e_k| / |e_{k−1}|^r, from which you should derive an equation for obtaining r at each iteration. You can use the computed solution in the last iteration as the true solution x* and take the last of these computed rates as the rate of convergence.
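
For reference, here is a minimal sketch of such a routine together with the rate estimate described above. The name newton1d, the iteration cap, and the stopping test on |f(x)| are my own choices; adapt them as needed.

    import numpy as np

    def newton1d(f, df, x0, tol=1e-12, maxit=100):
        # Newton's method in 1d; returns all iterates so a rate can be estimated.
        xs = [x0]
        for _ in range(maxit):
            x = xs[-1]
            x_new = x - f(x) / df(x)
            xs.append(x_new)
            if abs(f(x_new)) < tol:
                break
        return np.array(xs)

    # Example: problem (iii), x - cos(x) = 0 with x0 = 1.
    xs = newton1d(lambda x: x - np.cos(x), lambda x: 1 + np.sin(x), 1.0)
    e = np.abs(xs - xs[-1])           # treat the last iterate as x*
    e = e[e > 0]                      # drop exact zeros before taking logs
    # From |e_{k+1}|/|e_k|^r = |e_k|/|e_{k-1}|^r, solve for r at each step.
    r = np.log(e[2:] / e[1:-1]) / np.log(e[1:-1] / e[:-2])
    print("estimated rates per iteration:", r)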

Problem 2: Newton’s method for a system                                           
Consider the conversion formula from spherical to Cartesian coordinates:

π‘₯ = π‘Ÿ sin πœƒ cos πœ‘

𝑦 = π‘Ÿ sin πœƒ sin πœ‘

𝑧 = π‘Ÿ cos πœƒ

(a)    Write a function newton(f, J, x0, tol=1e-12, maxit=500) that implements Newton’s method generically, with a starting guess of x0.

f is a function that accepts a numpy array x of the current state and returns the function value f(x) as another numpy array. Write f so that f(x) = 0 at the sought solution. J is the Jacobian of f, so given a numpy array x, it returns a Jacobian matrix of the appropriate dimensions.

Terminate the iteration once either the residual is smaller than tol or maxit iterations have been performed.
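
A possible skeleton is sketched below; it is only one reading of the specification, and in particular it measures the residual as the 2-norm of f(x).

    import numpy as np

    def newton(f, J, x0, tol=1e-12, maxit=500):
        # Generic Newton's method for systems: solve J(x) dx = -f(x), then update x.
        x = np.array(x0, dtype=float)
        for _ in range(maxit):
            r = f(x)
            if np.linalg.norm(r) < tol:
                break
            dx = np.linalg.solve(J(x), -r)
            x = x + dx
        return x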

(b)   Write a function that, given x, y, and z, finds r, θ, and φ, using the Newton implementation from part (a). Do not reimplement Newton’s method; simply pass in appropriate functions f and J to newton from part (a).

Find a starting guess that leads to convergence in your experiments.

The routine should accept x, y, z in one input vector and return r, θ, and φ in one output vector.
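
One way to set this up is sketched below. The wrapper name cartesian_to_spherical and the starting guess are my own choices (you still need to experiment with guesses, as the statement asks), and the code assumes the newton routine from part (a).

    import numpy as np

    def cartesian_to_spherical(xyz):
        # Solve F(w) = 0 for w = [r, theta, phi], where F subtracts the target
        # Cartesian point from the spherical-to-Cartesian conversion.
        xyz = np.asarray(xyz, dtype=float)

        def f(w):
            r, th, ph = w
            return np.array([r * np.sin(th) * np.cos(ph),
                             r * np.sin(th) * np.sin(ph),
                             r * np.cos(th)]) - xyz

        def J(w):
            # Partial derivatives of (x, y, z) with respect to (r, theta, phi).
            r, th, ph = w
            return np.array([
                [np.sin(th) * np.cos(ph), r * np.cos(th) * np.cos(ph), -r * np.sin(th) * np.sin(ph)],
                [np.sin(th) * np.sin(ph), r * np.cos(th) * np.sin(ph),  r * np.sin(th) * np.cos(ph)],
                [np.cos(th),             -r * np.sin(th),               0.0],
            ])

        w0 = np.array([1.0, 1.0, 1.0])   # an arbitrary nonsingular guess; experiment with others
        return newton(f, J, w0)          # newton from part (a)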

(c)    Test your work from (a) and (b) by drawing 10 random vectors x* ∈ ℝ^3 using np.random.randn and finding their spherical coordinates.

For each case, print the final relative residual ‖x − x*‖_2 / ‖x*‖_2, where x are the Cartesian coordinates corresponding to the spherical coordinates returned by your routine.

Also compare the polar coordinates 𝑀̃ = [r̃ θ̃ φ̃]^T output by your routine with the true values computed using the formulas:

 

π‘Ÿ = √π‘₯2 + 𝑦2 + 𝑧2

πœƒ = arccos (𝑧 )

π‘Ÿ

𝑦

                                                                                               πœ‘ = arctan (  )

π‘₯

Let 𝑀 = [r θ φ]^T. For each example, print the relative error ‖𝑀 − 𝑀̃‖_2 / ‖𝑀‖_2 in your computed values. Is this small whenever the residual is small? Why/why not?
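
A sketch of this testing loop is shown below. It assumes the newton and cartesian_to_spherical sketches above, and uses np.arctan2 for the reference values (see the hint below); the variable names are illustrative only.

    import numpy as np

    for _ in range(10):
        x_star = np.random.randn(3)
        w_tilde = cartesian_to_spherical(x_star)      # [r, theta, phi] via Newton
        r, th, ph = w_tilde
        x = np.array([r * np.sin(th) * np.cos(ph),
                      r * np.sin(th) * np.sin(ph),
                      r * np.cos(th)])
        # Reference values from the direct formulas (np.arctan2 handles the quadrants).
        r_true = np.linalg.norm(x_star)
        w = np.array([r_true,
                      np.arccos(x_star[2] / r_true),
                      np.arctan2(x_star[1], x_star[0])])
        print("relative residual:", np.linalg.norm(x - x_star) / np.linalg.norm(x_star),
              "relative error:", np.linalg.norm(w - w_tilde) / np.linalg.norm(w))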

Hints:

•    Read the documentation for and use np.arctan2 in implementing the formulas to find r, θ, and φ.

•    It is instructive to watch the conditioning of the Jacobian. A singular Jacobian will cause the method to break down or compute inaccurate results. Some starting guesses are more prone to this than others.

Problem 3: Chebyshev polynomials, Vandermonde matrices 50 points
(a)     A function F_n(t) is known to satisfy the Chebyshev three-term recurrence if the following properties hold:

F_0(t) = 1

F_1(t) = t

F_{n+1}(t) = 2t F_n(t) − F_{n−1}(t)

Show that the function

F_n(t) = cos(n arccos(t))

satisfies the Chebyshev three-term recurrence. Hint: Remember the addition formula for cosines:

cos(α + β) = cos(α) cos(β) − sin(α) sin(β).

How would this apply to cos((n + 1) arccos(t))?
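
One possible route: write u = arccos(t), so that F_n(t) = cos(nu) and t = cos(u). The addition formula gives cos((n + 1)u) = cos(nu) cos(u) − sin(nu) sin(u), and applying it with −u in place of u gives cos((n − 1)u) = cos(nu) cos(u) + sin(nu) sin(u). Adding the two identities eliminates the sine terms and leaves cos((n + 1)u) + cos((n − 1)u) = 2 cos(u) cos(nu), which is the three-term recurrence in disguise.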

(b)    Deduce that F_n(t) is a polynomial. This is called the Chebyshev polynomial.

(c)     The generalized Vandermonde matrix V for a set of points x_1, …, x_n with a set of functions φ_1, …, φ_n is given by

V_ij = φ_j(x_i).

Notice that this matrix captures an essential bit of interpolation: it maps coefficients with respect to the functions (φ_j) to point values at the points (x_i), since for a coefficient vector c we have (Vc)_i = Σ_j c_j φ_j(x_i).

Clearly, V^{-1} describes the reverse process, and the conditioning of V can tell us quite a bit about how well-behaved an operation interpolation is with respect to the given sets of functions and nodes.

Perform the following steps:

i)      Construct n × n generalized Vandermonde matrices for n = 5, 10, 15, …, 100.

ii)    Plot the condition number of the matrix with respect to n, for each of the cases below:

i.    Equispaced nodes x_i in the interval [−1, 1] (endpoints included), with the monomials φ_j(x) = x^j.

ii.   Chebyshev nodes in the range [−1, 1], given by

x_i = cos((2i − 1)π / (2n)),   i = 1, …, n,

with the monomials.

iii.  Equispaced nodes x_i in the interval [−1, 1] (again, endpoints included), with the Chebyshev polynomials F_j.

iv.   Chebyshev nodes in the interval [−1, 1] (see equation above) with the Chebyshev polynomials F_j.

Your output should include one (clearly labeled) plot as described above showing (and comparing) the behavior of the condition number for all four cases.

Hint: Use matplotlib.pyplot.semilogy() to plot the values with a linear scale in n on the x axis and a logarithmic scale for the condition number on the y axis.
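
One way to assemble the four cases and plot their condition numbers is sketched below. The helper names are my own, the basis functions are indexed from 0, and the Chebyshev nodes are taken to be the roots cos((2i − 1)π/(2n)) as above; adjust if your conventions differ.

    import numpy as np
    import matplotlib.pyplot as plt

    def cheb_F(j, x):
        # Chebyshev polynomial F_j(x) via the three-term recurrence (scalar x).
        F_prev, F_cur = 1.0, x
        if j == 0:
            return F_prev
        for _ in range(j - 1):
            F_prev, F_cur = F_cur, 2.0 * x * F_cur - F_prev
        return F_cur

    def vandermonde(nodes, basis):
        # V[i, j] = basis_j(node_i); the basis is indexed from 0 here.
        n = len(nodes)
        return np.array([[basis(j, xi) for j in range(n)] for xi in nodes])

    def cheb_nodes(n):
        # Chebyshev nodes (roots): x_i = cos((2i - 1) pi / (2n)), i = 1..n.
        return np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))

    mono = lambda j, x: x ** j
    cases = {
        "equispaced nodes, monomials": (lambda n: np.linspace(-1.0, 1.0, n), mono),
        "Chebyshev nodes, monomials":  (cheb_nodes, mono),
        "equispaced nodes, Chebyshev": (lambda n: np.linspace(-1.0, 1.0, n), cheb_F),
        "Chebyshev nodes, Chebyshev":  (cheb_nodes, cheb_F),
    }
    ns = list(range(5, 101, 5))
    for label, (make_nodes, basis) in cases.items():
        conds = [np.linalg.cond(vandermonde(make_nodes(n), basis)) for n in ns]
        plt.semilogy(ns, conds, marker="o", label=label)
    plt.xlabel("n")
    plt.ylabel("condition number of V")
    plt.legend()
    plt.show()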

(d)    Which of the combinations performs best?

Problem 4: Interpolation, Newton and Cubic Spline                           
(a)    The following formula is called the divided-differences approach to computing the coefficients of an interpolating polynomial using Newton’s polynomials as a basis:

f[t_1, t_2, …, t_k] ≔ (f[t_2, t_3, …, t_k] − f[t_1, t_2, …, t_{k−1}]) / (t_k − t_1)

f[t_j] ≔ f(t_j)

Prove, using mathematical induction, that this approach indeed gives the coefficient of the jth basis function in the Newton interpolation polynomial.
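
The proof itself is pen and paper, but the recurrence is easy to exercise numerically. The small sketch below (the name divided_differences is my own, purely illustrative) builds the coefficients f[t_1], f[t_1, t_2], …, f[t_1, …, t_n] from data:

    import numpy as np

    def divided_differences(t, y):
        # After the loop, c[k] holds f[t_1, ..., t_{k+1}] (0-based indexing in code).
        t = np.asarray(t, dtype=float)
        c = np.array(y, dtype=float)
        for k in range(1, len(t)):
            # One application of the recurrence per pass, vectorized over the index j.
            c[k:] = (c[k:] - c[k - 1:-1]) / (t[k:] - t[:-k])
        return c

    # Example: the data of part (b).
    print(divided_differences([-1.0, 0.0, 1.0], [1.0, 0.0, 1.0]))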

(b)    Given the three data points (−1, 1), (0, 0), (1, 1), determine the interpolating polynomial of degree 2 (using hand calculations alone and showing all necessary steps) using:

i) the monomial basis; ii) the Lagrange basis; iii) the Newton basis.

iv) Show that all three representations give the same polynomial.

(c)    Consider interpolating a given set of data points (x_i, y_i), i = 1, …, n, using natural cubic splines. Write code to set up and solve the linear system that performs this interpolation. Plot the resulting cubic spline along with the data. For the data, pick n = 6 random points (x_i)_{i=1}^n on [0, 1) with values (y_i)_{i=1}^n in [0, 1).

Hint: Make sure you sort the x_i’s after you draw the random numbers and before you start constructing the spline, to avoid confusing your spline construction code.
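
One way to set up and solve the system is via the second derivatives M_i of the spline at the nodes; the sketch below uses that formulation (the function names are my own, and you may prefer a different but equivalent linear system).

    import numpy as np
    import matplotlib.pyplot as plt

    def natural_spline_second_derivs(x, y):
        # Build and solve the linear system for the second derivatives M_0, ..., M_{n-1};
        # the natural boundary conditions force M_0 = M_{n-1} = 0.
        n = len(x)
        h = np.diff(x)
        A = np.zeros((n, n))
        b = np.zeros(n)
        A[0, 0] = A[-1, -1] = 1.0
        for i in range(1, n - 1):
            A[i, i - 1] = h[i - 1]
            A[i, i] = 2.0 * (h[i - 1] + h[i])
            A[i, i + 1] = h[i]
            b[i] = 6.0 * ((y[i + 1] - y[i]) / h[i] - (y[i] - y[i - 1]) / h[i - 1])
        return np.linalg.solve(A, b)

    def eval_spline(x, y, M, xx):
        # Evaluate the piecewise cubic on each interval [x_i, x_{i+1}].
        i = np.clip(np.searchsorted(x, xx) - 1, 0, len(x) - 2)
        h = x[i + 1] - x[i]
        return ((M[i] * (x[i + 1] - xx) ** 3 + M[i + 1] * (xx - x[i]) ** 3) / (6 * h)
                + (y[i] / h - M[i] * h / 6) * (x[i + 1] - xx)
                + (y[i + 1] / h - M[i + 1] * h / 6) * (xx - x[i]))

    x = np.sort(np.random.rand(6))    # sort the nodes, as the hint suggests
    y = np.random.rand(6)
    M = natural_spline_second_derivs(x, y)
    xx = np.linspace(x[0], x[-1], 400)
    plt.plot(xx, eval_spline(x, y, M, xx), label="natural cubic spline")
    plt.plot(x, y, "o", label="data")
    plt.legend()
    plt.show()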

Submission Notes
1.     The answers to all theoretical problems and the output of Python code for computational problems, including figures, tables, and accompanying analyses, should be provided in a single PDF file along with all your code files.

2.     You can typeset your solutions to the theoretical questions (please consider using LaTeX if you choose to do so), or write them by hand and scan/take photographs of them. For scanning or photography, please put in the extra effort and provide a single PDF file of your submission.

3.     For Problem 2, submit your code in one file: problem_2.py. You can use the Python file provided as a boilerplate.

4.     For Problem 3, submit your code in one file again: problem_3.py. You will, of course, copy-paste the functions written in problem_2.py for Gaussian elimination without and with partial pivoting into this file.
