CS726 Homework #4 Solution

Please typeset or write your solutions neatly! If we cannot read them, we cannot grade them.
Note: You can use the results we have proved in class – no need to prove them again.
Q 1. Let f : Rn → R be an L-smooth function and let X ⊆ Rn be a closed, convex, and nonempty set. Recall the definition of the gradient mapping: Gη(x) := η(x − PX(x − (1/η)∇f(x))), where PX(·) denotes the Euclidean projection onto X. Prove that Gη(·) is (2η + L)-Lipschitz continuous. [20pts]
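As a sanity check (not a full write-up), here is a minimal sketch of one possible route, assuming the gradient-mapping definition as reconstructed above; it uses only the triangle inequality, the nonexpansiveness of PX (proved in class), and L-smoothness:

\begin{align*}
\|G_\eta(x) - G_\eta(y)\|
  &\le \eta\|x - y\| + \eta\,\big\|P_X\big(x - \tfrac{1}{\eta}\nabla f(x)\big) - P_X\big(y - \tfrac{1}{\eta}\nabla f(y)\big)\big\|
  && \text{(triangle inequality)} \\
  &\le \eta\|x - y\| + \eta\,\big\|\big(x - \tfrac{1}{\eta}\nabla f(x)\big) - \big(y - \tfrac{1}{\eta}\nabla f(y)\big)\big\|
  && \text{(nonexpansiveness of } P_X\text{)} \\
  &\le 2\eta\|x - y\| + \|\nabla f(x) - \nabla f(y)\|
  \;\le\; (2\eta + L)\|x - y\|
  && \text{($L$-smoothness).}
\end{align*}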
Q 2. Consider a constrained minimization problem min_{x∈X} f(x), where f : Rn → R is L-smooth and X ⊆ Rn is a hyper-rectangle: X = {x ∈ Rn : xi ∈ [ai, bi], ∀i ∈ {1,2,...,n}}, where a, b ∈ Rn are vectors that satisfy ∀i ∈ {1,...,n} : ai < bi.
(i) What is PX(x)? Write it in closed form. [10pts]
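For concreteness, a minimal numerical sketch of the projection in Part (i); the function name and the use of NumPy are illustrative choices, not part of the assignment:

```python
import numpy as np

def project_box(x, a, b):
    """Euclidean projection onto X = {x : a_i <= x_i <= b_i for all i}.

    The squared distance ||u - x||^2 separates across coordinates, so the
    projection reduces to clamping each coordinate x_i into [a_i, b_i].
    """
    return np.clip(x, a, b)

# Example: project (2, -3) onto [0, 1] x [-1, 1].
print(project_box(np.array([2.0, -3.0]),
                  np.array([0.0, -1.0]),
                  np.array([1.0, 1.0])))   # -> [ 1. -1.]
```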
(ii) Define ∆i(x) := ∇if(x)ei, where ei is the ith standard basis vector (all of its coordinates are equal to zero, except for the ith, which equals one). Define:

T(x,i) := argmin_{u∈X} { ⟨∆i(x), u − x⟩ + (L/2)‖u − x‖² }.

Express the gradient mapping GL(x) as a function of L, x, and T(x,i), i ∈ {1,...,n}. [10pts]
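A hint connecting Parts (i) and (ii): because X is a product of intervals, the minimization defining T(x,i) decouples across coordinates, so for x ∈ X one would expect

\[
T(x,i) = x + \Big(P_{[a_i,b_i]}\big(x_i - \tfrac{1}{L}\nabla_i f(x)\big) - x_i\Big)\, e_i,
\]

i.e., T(x,i) differs from x in the ith coordinate only, with that coordinate clamped into [ai, bi].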
(iii) Consider the following method that starts from some initial point x0 ∈ X and updates its iterates xk for k ≥ 0 as:

i∗k = argmax_{1≤i≤n} |(GL(xk))i|,
xk+1 = T(xk, i∗k).
Prove the following sufficient descent property for this algorithm:

f(xk+1) ≤ f(xk) − (α/L)‖GL(xk)‖∞².

What is the largest α for which this property holds? What can you say about convergence of this method if f is bounded below by some f̄? [10pts]
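For intuition, a minimal runnable sketch of the greedy method from Part (iii) on a toy box-constrained quadratic; the objective, the instance, and the closed form of the coordinate update are illustrative assumptions, not part of the assignment:

```python
import numpy as np

# Toy instance: f(x) = 0.5 x^T Q x - c^T x on the box [0,1]^2.
# f is L-smooth with L = ||Q||_2 (largest singular value of Q).
Q = np.array([[3.0, 1.0], [1.0, 2.0]])
c = np.array([1.0, -1.0])
a, b = np.zeros(2), np.ones(2)
L = np.linalg.norm(Q, 2)

grad = lambda x: Q @ x - c

def gradient_mapping(x):
    # G_L(x) = L (x - P_X(x - (1/L) grad f(x))), with P_X a coordinate-wise clamp.
    return L * (x - np.clip(x - grad(x) / L, a, b))

x = np.ones(2)  # x0 in X
for k in range(200):
    G = gradient_mapping(x)
    i = int(np.argmax(np.abs(G)))  # greedy coordinate i*_k
    x[i] -= G[i] / L               # x_{k+1} = T(x_k, i*_k): only coordinate i moves
print(x, np.linalg.norm(gradient_mapping(x)))  # gradient mapping vanishes at stationarity
```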
Q 3. Consider a constrained minimization problem min_{x∈X} f(x), where f : Rn → R is convex and M-Lipschitz, and X ⊆ Rn is closed, convex, and nonempty. Assume that there exists x∗ ∈ X that minimizes f. Recall from class that we use gx to denote an arbitrary subgradient of f at x ∈ X. Distances within the set X are measured using some norm ‖ · ‖ (e.g., an ℓp norm for p ≥ 1). The norm dual to ‖ · ‖ is denoted by ‖ · ‖∗ and is defined by: ‖z‖∗ = max_{u:‖u‖≤1} ⟨z,u⟩. By the definition of a dual norm, you can immediately deduce that:

(∀u,z) : ⟨u,z⟩ ≤ ‖u‖ ‖z‖∗.

For example, the ℓ1 and ℓ∞ norms are dual to each other. Note also that since f is M-Lipschitz w.r.t. the norm ‖ · ‖, we have, ∀x,y and all gx ∈ ∂f(x):

|f(x) − f(y)| ≤ M‖x − y‖ and ‖gx‖∗ ≤ M.
In this part, you will analyze the convergence of the Mirror Descent algorithm, defined by:
xk+1 = argmin_{u∈X} { ak ⟨gxk, u − xk⟩ + Dψ(u,xk) }, (MD)
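For intuition, a minimal runnable sketch of (MD) in a classical special case: X is the probability simplex and ψ is the negative entropy, for which the (MD) update has a closed multiplicative form. The toy objective and all names are illustrative assumptions, not part of the assignment:

```python
import numpy as np

# Entropic mirror descent on the simplex X = {x >= 0 : sum(x) = 1}.
# With psi(x) = sum_i x_i log x_i (1-strongly convex w.r.t. ||.||_1 on X),
# the (MD) update x_{k+1} = argmin_{u in X} { a_k <g_{x_k}, u - x_k> + D_psi(u, x_k) }
# reduces to the multiplicative rule x_{k+1} proportional to x_k * exp(-a_k g_{x_k}).

def md_step(x, g, a):
    w = x * np.exp(-a * g)
    return w / w.sum()

# Toy objective: f(x) = max_i <C_i, x>, convex and Lipschitz;
# a subgradient at x is the row of C attaining the max.
C = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.5, 0.5]])
x = np.full(3, 1.0 / 3.0)          # x0: uniform distribution
avg, A = np.zeros(3), 0.0
for k in range(2000):
    a_k = 0.5 / np.sqrt(k + 1)     # diminishing step sizes
    g = C[np.argmax(C @ x)]        # a subgradient g_{x_k}
    x = md_step(x, g, a_k)
    avg, A = avg + a_k * x, A + a_k
x_out = avg / A                    # weighted-average output point
print(x_out, np.max(C @ x_out))    # approx. minimizer is near (0.6, 0.4, 0)
```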
where, as in previous assignments, Dψ(x,y) denotes the Bregman divergence between x and y w.r.t. a continuously differentiable function ψ.
Assume that ψ is 1-strongly convex w.r.t. ‖ · ‖, namely:

(∀x,y) : ψ(x) ≥ ψ(y) + ⟨∇ψ(y), x − y⟩ + (1/2)‖x − y‖², equivalently, Dψ(x,y) ≥ (1/2)‖x − y‖².
(i) Using the first-order optimality condition in the definition of xk+1 and the three-point identity of Bregman divergences we proved in HW #2, show that, ∀u ∈ X:

ak ⟨gxk, xk+1 − u⟩ ≤ Dψ(u,xk) − Dψ(u,xk+1) − Dψ(xk+1,xk). [15pts]
(ii) Use Part (i) to prove that:

ak (f(xk) − f(x∗)) ≤ Dψ(x∗,xk) − Dψ(x∗,xk+1) + (ak² M²)/2. [15pts]
(iii) Use Part (ii) to prove that, ∀k ≥ 0:

f(xkout) − f(x∗) ≤ ( Dψ(x∗,x0) + (M²/2) Σ_{i=0}^{k} ai² ) / Ak,

where xkout := (1/Ak) Σ_{i=0}^{k} ai xi and Ak := Σ_{i=0}^{k} ai. [10pts]
(iv) Discuss how you would choose the step sizes ak for (MD) to converge as fast as possible, and provide the convergence bound (a bound on f(xkout) − f(x∗)) for your choice(s) of ak. [10pts]
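As a sanity check for Part (iv), here is the standard calculation for a constant step size ak ≡ a run for k + 1 iterations, assuming the bound from Part (iii) as reconstructed above (one natural choice, not necessarily the only intended answer): plugging ai = a into Part (iii) gives

\[
f(x_k^{\mathrm{out}}) - f(x^*) \;\le\; \frac{D_\psi(x^*, x_0)}{(k+1)\,a} + \frac{M^2 a}{2},
\]

and minimizing the right-hand side over a > 0 yields a = \frac{1}{M}\sqrt{\frac{2 D_\psi(x^*, x_0)}{k+1}} and the bound M\sqrt{\frac{2 D_\psi(x^*, x_0)}{k+1}} = O(1/\sqrt{k}).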