CS590 homework 1 Solved

1.      The function f1(x0, h) = sin(x0 + h) − sin(x0) can be transformed into another form, f2(x0, h), using the trigonometric identity given below:

                                                                                                               πœ™ + πœ“           πœ™ − πœ“

sin(πœ™) − sin(πœ“) = 2 cos ( ) sin ( )

                                                                                                                     2                   2

Thus, f1 and f2 have the same values, in exact arithmetic, for any given argument values x0 and h.

a.      Derive f2(x0, h).

b.      Suggest a formula that avoids cancellation errors for computing the approximation (f(x0 + h) − f(x0))/h to the derivative of f(x) = sin(x) at x = x0. Write a Python program that implements your formula and computes an approximation of f′(1.2) for h = 1e-20, 1e-19, …, 1.
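A minimal sketch of how such a program might look, assuming the cancellation-free form that follows from the identity in part (a), (sin(x0 + h) − sin(x0))/h = 2 cos(x0 + h/2) sin(h/2)/h; the function names and output format are illustrative, not prescribed by the assignment:

```python
import math

def naive_diff(x0, h):
    # Forward difference as written: suffers from cancellation when h is small.
    return (math.sin(x0 + h) - math.sin(x0)) / h

def stable_diff(x0, h):
    # Same quantity rewritten with the identity from part (a):
    # (sin(x0 + h) - sin(x0)) / h = 2 cos(x0 + h/2) sin(h/2) / h  -- no subtraction of close numbers.
    return 2.0 * math.cos(x0 + 0.5 * h) * math.sin(0.5 * h) / h

x0 = 1.2
exact = math.cos(x0)  # f'(x) = cos(x) for f(x) = sin(x)
print(f"exact f'(1.2) = {exact:.16f}")
for k in range(-20, 1):          # h = 1e-20, 1e-19, ..., 1
    h = 10.0 ** k
    print(f"h=1e{k:+03d}  naive={naive_diff(x0, h):+.16f}  stable={stable_diff(x0, h):+.16f}")
```

Comparing the two columns makes the point of the exercise visible: the naive difference collapses to 0 once h falls below the rounding level, while the identity-based form stays close to cos(1.2) for all h.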

 

2.      Consider the linear system  

                                                                                      π‘Ž    π‘     π‘₯         1

                                                                                    (𝑏     π‘Ž) (𝑦) = (0)

with a, b > 0.

a.      If a ≈ b, what is the numerical difficulty in solving this linear system?

b.      Suggest a numerically stable formula for computing z = x + y given a and b (see the sketch following this problem).

c.       Determine whether the following statement is true or false, and explain why:

“When a ≈ b, the problem of solving the linear system is ill-conditioned, but the problem of computing x + y is not ill-conditioned.”
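As a worked sketch (not the official solution): eliminating y from the system gives x = a/(a² − b²) and y = −b/(a² − b²), so x + y = (a − b)/(a² − b²) = 1/(a + b), which involves no cancellation. A short Python check, with a ≈ b chosen purely for illustration:

```python
# a ≈ b, values chosen only for illustration
a, b = 1.0, 1.0 - 1e-12

det = a * a - b * b          # nearly zero: the system itself is ill-conditioned
x = a / det                  # huge
y = -b / det                 # huge and nearly equal to -x
z_naive = x + y              # cancellation wipes out most of the significant digits
z_stable = 1.0 / (a + b)     # algebraically identical to x + y, but computed stably

print("x, y     :", x, y)
print("naive z  :", z_naive)
print("stable z :", z_stable)
```

The naive sum loses roughly as many digits as the solution components are large, while 1/(a + b) agrees with the exact value to full precision.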

3.      With exact rounding, we know that each elementary operation has a relative error which is bounded in terms of the rounding unit η; e.g., for two floating-point numbers x and y, fl(x + y) = (x + y)(1 + ε), |ε| ≤ η. But is this true also for elementary functions such as sin, ln, and exponentiation?

Consider exponentiation, which is performed according to the formula below.

π‘₯𝑦 = 𝑒𝑦𝑙𝑛(π‘₯)⁑(π‘Žπ‘ π‘ π‘’π‘šπ‘–π‘›π‘”β‘π‘₯ 0)

Estimate the relative error in calculating x^y in floating point, assuming fl(ln z) = (ln z)(1 + ε), |ε| ≤ η, and that everything else is exact. Show that the sort of bound we have for elementary operations and for ln does not hold for exponentiation when x^y is very large.
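The flavor of the estimate, sketched under the problem's own assumption that the logarithm is the only error source: the computed value is e^(y ln(x)(1 + ε)) = x^y · e^(ε y ln x) ≈ x^y (1 + ε y ln x), so the relative error is roughly |y ln x| · η, which grows with ln(x^y) rather than staying bounded by a constant multiple of η. A small Python experiment (η is taken to be the double-precision rounding unit 2^(−52), an assumption for illustration):

```python
import math

eta = 2.0 ** -52              # assumed rounding unit of the target floating-point system

def relerr_exp(x, y):
    # Perturb ln(x) by one rounding unit (the only error source assumed in the problem)
    # and measure how far e^{y * ln(x) * (1 + eta)} lands from x^y, in relative terms.
    t = y * math.log(x)
    exact = math.exp(t)
    perturbed = math.exp(t * (1.0 + eta))
    return abs(perturbed - exact) / exact, abs(t) * eta  # observed vs. predicted |y ln x| * eta

for x, y in [(2.0, 10.0), (2.0, 100.0), (2.0, 500.0)]:
    observed, predicted = relerr_exp(x, y)
    print(f"x={x}, y={y}: observed ≈ {observed:.2e}, |y ln x|*eta ≈ {predicted:.2e}")
```

As y grows (and x^y becomes huge), the observed relative error tracks |y ln x| · η and clearly exceeds the O(η) bound that holds for elementary operations.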

4.      You are required by a computer manufacturer to write a library function for a given floating-point system to find the cube root y^(1/3) of any given positive number y. Any such relevant floating-point number can be represented as y = a · 2^e, where a is a normalized fraction (0.5 ≤ a < 1) and e is an integer exponent. This library function must be very efficient and it should always work. For efficiency purposes it makes sense to store some useful constants ahead of computation time, e.g., the constants 2^(1/3), 2/3, and a/3, should these prove useful.

a.      Show how y^(1/3) can be obtained, once a^(1/3) has been calculated for the corresponding fraction, in at most five additional flops.

b.      Derive the corresponding Newton iteration. What is the flop count per iteration? (See the sketch following this problem.)

c.       How would you choose an initial approximation? Roughly how many iterations are needed? (The machine rounding unit is 2^(−52).)
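A rough Python sketch of the approach the problem describes, assuming Newton's method applied to g(x) = x³ − a, which gives x_{k+1} = (2 x_k + a/x_k²)/3, followed by reassembling y^(1/3) = a^(1/3) · 2^(e/3) with the stored constant 2^(1/3). The helper name, the crude initial guess x = 1, and the stopping tolerance are assumptions for illustration, not part of the assignment:

```python
import math

CBRT2 = 2.0 ** (1.0 / 3.0)          # precomputed constant 2^(1/3), as suggested in the problem
TWO_THIRDS = 2.0 / 3.0              # precomputed constant 2/3

def cbrt(y, tol=2.0 ** -52, max_iter=60):
    """Illustrative cube root of positive y via y = a * 2^e with 0.5 <= a < 1."""
    a, e = math.frexp(y)            # y = a * 2^e, 0.5 <= a < 1
    # Newton's method on g(x) = x^3 - a:  x_{k+1} = (2*x_k + a/x_k^2) / 3
    x = 1.0                         # crude initial guess; part (c) asks for a better choice
    for _ in range(max_iter):
        x_new = TWO_THIRDS * x + (a / (x * x)) / 3.0
        if abs(x_new - x) <= tol * abs(x_new):
            x = x_new
            break
        x = x_new
    # Reassemble: y^(1/3) = a^(1/3) * 2^(e/3); write e = 3*q + r with r in {0, 1, 2}
    q, r = divmod(e, 3)
    return math.ldexp(x * CBRT2 ** r, q)

for y in [8.0, 10.0, 1e-30, 3.0e200]:
    print(y, cbrt(y), y ** (1.0 / 3.0))
```

The point of the split y = a · 2^e is that Newton only ever runs on a fraction in [0.5, 1), so a fixed initial guess and a fixed, small number of iterations suffice for full double precision regardless of the magnitude of y.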

 

 
