MA514 - HW 1 - Solved

Exercise 1.1
(a) The product must be split into two lines to fit on the page. Note that by 'delete column 1' in step 7 we take this to mean that the first column of the matrix is removed entirely, in contrast with the possible interpretation that the first column of the matrix is zeroed out.

 

Exercise 1.2
(a) We wish to write a matrix equation of the form

f = Kx .

We find individual force equations of the system to be:

f1 = k12(x2 − x1 − l12)

f2 = k23(x3 − x2 − l23)

f3 = k34(x4 − x3 − l34)

f4 = 0

We can collect this information into an intermediate step in writing the matrix equation, given by:

f = [ −k12   k12    0     0  ]     [ k12 l12 ]
    [   0   −k23   k23    0  ] x − [ k23 l23 ]
    [   0     0   −k34   k34 ]     [ k34 l34 ]
    [   0     0     0     0  ]     [    0    ]

It is only possible to rewrite this equation in the form f = Kx by using the fact that x1, x2, x3, x4 ≠ 0 and factoring these values out of the constant terms (e.g. k12 l12 = (k12 l12 / x1) x1, which folds each constant into the diagonal) to arrive at:

K = [ −k12 − k12 l12/x1    k12                    0                     0  ]
    [   0                 −k23 − k23 l23/x2      k23                    0  ]
    [   0                   0                   −k34 − k34 l34/x3      k34 ]
    [   0                   0                     0                     0  ]
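As a quick numerical sanity check (not part of the exercise), the sketch below builds K with the rest-length terms folded into the diagonal, for arbitrarily chosen spring constants, rest lengths, and nonzero positions, and confirms that Kx reproduces the individual force equations:

```python
import numpy as np

# Constants and positions chosen arbitrarily for illustration.
k12, k23, k34 = 2.0, 3.0, 5.0        # spring constants (assumed values)
l12, l23, l34 = 1.0, 1.5, 0.5        # rest lengths (assumed values)
x = np.array([1.0, 2.5, 4.0, 6.0])   # mass positions, all nonzero

# Direct evaluation of f1..f4 from the individual force equations.
f_direct = np.array([
    k12 * (x[1] - x[0] - l12),
    k23 * (x[2] - x[1] - l23),
    k34 * (x[3] - x[2] - l34),
    0.0,
])

# K with the constant terms folded into the diagonal (possible since x_i != 0).
K = np.array([
    [-k12 - k12 * l12 / x[0], k12, 0.0, 0.0],
    [0.0, -k23 - k23 * l23 / x[1], k23, 0.0],
    [0.0, 0.0, -k34 - k34 * l34 / x[2], k34],
    [0.0, 0.0, 0.0, 0.0],
])

f_matrix = K @ x
assert np.allclose(f_direct, f_matrix)
```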

Exercise 1.3
Let R ∈ Cm×m be an upper triangular matrix, i.e., r_{in} = 0 if i > n, and suppose that R is nonsingular. We will show that R⁻¹ is also upper triangular. If we denote the (i, n) entry of R⁻¹ by (R⁻¹)_{in}, this means we must show that (R⁻¹)_{in} = 0 if i > n.

Since R is nonsingular, the columns of R are linearly independent. In particular, if we consider the first n ≤ m columns of R, which we denote by R1, R2, ..., Rn, this subcollection of columns of R must also be linearly independent. Since r_{ij} = 0 for i > j, the set {R1,...,Rn} spans the vector space {y ∈ Cm : yi = 0 if i > n}, and because R1,...,Rn are linearly independent, they form a basis for this vector space. Now consider the product

R R⁻¹ = I .

Note that the nth column of I (with n ≤ m) is the canonical unit vector en, and the equation above shows that en is the result of taking the linear combination of R1,...,Rn,...,Rm with coefficients given by the entries of the nth column of R⁻¹. That is, using equation 1.8, we have

en = ∑_{i=1}^{m} (R⁻¹)_{in} Ri = (R⁻¹)_{1n} R1 + (R⁻¹)_{2n} R2 + ... + (R⁻¹)_{nn} Rn + ... + (R⁻¹)_{mn} Rm .

Since R is nonsingular, the coefficients (R⁻¹)_{in} in the equation above are uniquely defined by Theorem 1.2. But since en has all zero entries beyond the nth position (if n = m this reasoning still applies, though there are no entries below the mth position), we see that en is in the span of the columns {R1,...,Rn}. This means that

en = ∑_{i=1}^{n} (R⁻¹)_{in} Ri ,

where the coefficients are the same as in the previous equation, because if other coefficients were possible this would contradict the fact that the columns of R (and therefore the first n columns) are linearly independent. Thus,

en = ∑_{i=1}^{n} (R⁻¹)_{in} Ri = ∑_{i=1}^{m} (R⁻¹)_{in} Ri ,

which implies that (R⁻¹)_{in} = 0 if i > n. But since n was an arbitrary integer with 1 ≤ n ≤ m, we have shown, by the definition of an upper triangular matrix, that R⁻¹ is upper triangular.
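A small numerical illustration of this result, using an arbitrarily chosen upper triangular matrix (a sketch, not part of the proof):

```python
import numpy as np

# The inverse of a nonsingular upper triangular matrix is upper triangular.
rng = np.random.default_rng(0)
m = 6
R = np.triu(rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m)))
R[np.diag_indices(m)] = 2.0 + np.arange(m)  # nonzero diagonal => nonsingular

R_inv = np.linalg.inv(R)

# Entries strictly below the diagonal of R^{-1} vanish (up to roundoff).
assert np.allclose(np.tril(R_inv, k=-1), 0.0)
```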

Exercise 1.4
Let f1,...,f8 : [0,8] → C be a set of functions with the property that for any choice of numbers d1,...,d8 ∈ C there exists a set of coefficients c1,...,c8 ∈ C such that

∑_{j=1}^{8} cjfj(i) = di ,    i = 1,...,8.

(a) We will show that choosing d1,...,d8 will determine c1,...,c8 uniquely.

Fix an arbitrary selection d1,...,d8. By hypothesis we have the system of equations

d1 = c1f1(1) + c2f2(1) + ... + c8f8(1)

d2 = c1f1(2) + c2f2(2) + ... + c8f8(2)

...

d8 = c1f1(8) + c2f2(8) + ... + c8f8(8)

which gives the matrix equation

d = Fc ,

where d = (d1,...,d8)T, c = (c1,...,c8)T, and F ∈ C8×8 has entries Fij = fj(i).

Using the notation defined above, our hypothesis guarantees: for any d ∈ C8, there exists an element c ∈ C8 such that d = Fc. But this statement means that for the matrix F ∈ C8×8, range(F) = C8. By Theorem 1.3, this is equivalent to the statement that rank(F) = 8, i.e., F is a full rank 8 × 8 matrix. Then by Theorem 1.2, F does not map two distinct vectors to the same vector. In other words, the vector c is uniquely determined, and therefore we conclude that the elements of c, which are c1,...,c8, are uniquely determined by d1,...,d8, the elements of d.
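The argument can be illustrated numerically. The sketch below assumes a concrete choice of functions fj (monomials, purely for illustration), checks that F has full rank, and recovers c uniquely from a given d:

```python
import numpy as np

# Sketch: build F with F[i, j] = f_j(i) for sample functions f_j(t) = t^(j-1)
# (an assumed choice, not from the exercise), then solve d = Fc.
fs = [lambda t, p=p: t ** p for p in range(8)]
F = np.array([[f(i) for f in fs] for i in range(1, 9)], dtype=float)

assert np.linalg.matrix_rank(F) == 8    # range(F) = C^8, i.e. full rank

d = np.arange(1.0, 9.0)                 # an arbitrary right-hand side
c = np.linalg.solve(F, d)               # the unique coefficient vector
assert np.allclose(F @ c, d)
```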

Exercise 2.2
The Pythagorean theorem asserts that for a set of n orthogonal vectors {xi},

||∑_{i=1}^{n} xi||² = ∑_{i=1}^{n} ||xi||² .

(a) Prove this in the case n = 2 by an explicit computation of ||x1 + x2||².

||x1 + x2||² = (x1 + x2)∗(x1 + x2)

= x1∗(x1 + x2) + x2∗(x1 + x2)        (by bilinearity)

= x1∗x1 + x1∗x2 + x2∗x1 + x2∗x2      (by bilinearity)

= x1∗x1 + x2∗x2                      (by orthogonality)

= ||x1||² + ||x2||²                  (by definition of Euclidean length)

(b) For n = 1, the assertion is immediately seen to hold, since it becomes

||x1||² = ||x1||² .

(Note that if the base case of the inductive proof is taken to be n = 2, then we have also already established this in part (a).)

Assume the inductive hypothesis. That is, assume that for some n ∈ N,

||∑_{i=1}^{n} xi||² = ∑_{i=1}^{n} ||xi||² .

Now consider the case of n + 1 orthogonal vectors. We will use the fact that ∑_{i=1}^{n} xi is actually just a vector (the sum of vectors is a vector) and that this vector is orthogonal to the vector xn+1, since

(∑_{i=1}^{n} xi)∗ xn+1 = ∑_{i=1}^{n} xi∗xn+1 = 0 .

This means we may apply the result of part (a) to the two vectors ∑_{i=1}^{n} xi and xn+1:

||∑_{i=1}^{n+1} xi||² = ||∑_{i=1}^{n} xi||² + ||xn+1||²    (by part a)

= ∑_{i=1}^{n} ||xi||² + ||xn+1||²                          (by induction hypothesis)

= ∑_{i=1}^{n+1} ||xi||²
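A numerical sketch of the theorem for a set of orthogonal vectors generated from a QR factorization (an arbitrary construction, not part of the proof):

```python
import numpy as np

# Orthogonal vectors x_1..x_n: scaled columns of a random orthogonal matrix Q.
rng = np.random.default_rng(1)
m, n = 10, 4
Q, _ = np.linalg.qr(rng.standard_normal((m, m)))
xs = [rng.uniform(0.5, 2.0) * Q[:, i] for i in range(n)]

# ||sum x_i||^2 equals sum ||x_i||^2 for orthogonal vectors.
lhs = np.linalg.norm(sum(xs)) ** 2
rhs = sum(np.linalg.norm(x) ** 2 for x in xs)
assert np.isclose(lhs, rhs)
```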

Exercise 2.3

Let A ∈ Cm×m be hermitian, i.e. A = A∗ = (Ā)T. An eigenvector of A is a nonzero vector x ∈ Cm such that Ax = λx for some λ ∈ C, the corresponding eigenvalue.

(a) Prove that all eigenvalues of A are real.

Let (λ, x) be an eigenvalue-eigenvector pair for the hermitian matrix A ∈ Cm×m. We have:

x∗Ax = x∗(λx) = λx∗x            (using bilinearity and eqn 2.4)

(x∗Ax)∗ = x∗A∗x = λ̄x∗x          (since (x∗)∗ = x and x∗x is real)

x∗Ax = x∗A∗x                    (A = A∗)

Since x∗Ax = λx∗x and x∗A∗x = λ̄x∗x, we see that λx∗x = λ̄x∗x, and since x ≠ 0 it follows that x∗x > 0. Therefore it must be the case that λ = λ̄, which means that λ ∈ R. Since λ was an arbitrary eigenvalue, we conclude that all eigenvalues of A are real.

(b) Prove that if x and y are eigenvectors corresponding to distinct eigenvalues, then x and y are orthogonal.

Let x, y be eigenvectors of the hermitian matrix A ∈ Cm×m corresponding to the distinct eigenvalues λx, λy, so that Ax = λxx and Ay = λyy. We have:

λx y∗x = y∗(Ax) = (A∗y)∗x = (Ay)∗x = (λyy)∗x = λ̄y y∗x = λy y∗x ,

where the final equality holds because λy is real by part a. Suppose that x, y were not orthogonal, so that y∗x ≠ 0. Then this last equality would show that λx = λy, which is a contradiction. Therefore we must conclude that y∗x = 0, meaning that x and y are orthogonal.
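Both parts can be checked numerically on a randomly generated hermitian matrix (a sketch, not part of the proof):

```python
import numpy as np

# A = B + B* is hermitian by construction.
rng = np.random.default_rng(2)
m = 5
B = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
A = B + B.conj().T

lam, V = np.linalg.eig(A)
assert np.allclose(lam.imag, 0.0)       # part (a): eigenvalues are real

# part (b): eigenvectors for distinct eigenvalues are orthogonal, so the
# Gram matrix of the (unit-norm) eigenvectors is the identity.
gram = V.conj().T @ V
assert np.allclose(gram, np.eye(m))
```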

Exercise 2.4
What can be said about the eigenvalues of a unitary matrix?

Response: If λ is an eigenvalue of the unitary matrix Q ∈ Cm×m then |λ| = 1.

Proof: Since Q is unitary, we have Q∗Q = QQ∗ = I. Let x be an eigenvector of Q with eigenvalue λ, so that Qx = λx and hence ||Qx|| = ||λx||. By equation 2.10 we also have ||Qx|| = ||x||. Since ||λx|| = |λ| ||x||, it follows that ||x|| = |λ| ||x||. Because x is an eigenvector, x ≠ 0 and so ||x|| > 0. Dividing through by ||x||, we arrive at |λ| = 1.
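A numerical sketch of this fact, using a unitary matrix obtained from the QR factorization of a random complex matrix:

```python
import numpy as np

# Q from a QR factorization of a random complex matrix is unitary.
rng = np.random.default_rng(3)
m = 6
B = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
Q, _ = np.linalg.qr(B)
assert np.allclose(Q.conj().T @ Q, np.eye(m))   # Q*Q = I

# Every eigenvalue of Q has modulus 1.
lam = np.linalg.eigvals(Q)
assert np.allclose(np.abs(lam), 1.0)
```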

Exercise 2.5
Let S ∈ Cm×m be skew-hermitian, i.e., S∗ = −S.

(a) Show by Exercise 2.3 that the eigenvalues of S are pure imaginary.

Let λ ∈ C be an eigenvalue of the skew-hermitian matrix S, with λ = a + bi for some a, b ∈ R, and let x be a corresponding eigenvector. We will show that a = 0. We have

x∗Sx = x∗(λx) = λx∗x ,

and taking the conjugate transpose of this scalar equation,

λ̄x∗x = (x∗Sx)∗ = x∗S∗x = −x∗Sx = −λx∗x .

Therefore λ̄x∗x = −λx∗x. Since x∗x = ||x||² ≠ 0, this means that

a + bi = λ = −λ̄ = −(a − bi)

a + bi = −a + bi  ⟹  a = −a  ⟹  a = 0 .

Therefore, we see that λ = bi is pure imaginary, as we wanted to show.
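A numerical sketch of this result on a randomly generated skew-hermitian matrix:

```python
import numpy as np

# S = B - B* is skew-hermitian by construction (S* = -S).
rng = np.random.default_rng(4)
m = 5
B = rng.standard_normal((m, m)) + 1j * rng.standard_normal((m, m))
S = B - B.conj().T
assert np.allclose(S.conj().T, -S)

# Every eigenvalue of S is purely imaginary: Re(lambda) = 0.
lam = np.linalg.eigvals(S)
assert np.allclose(lam.real, 0.0)
```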
