CMPUT 466 - Assignment 5 Solved

Problem 1.

Consider the training objective J = ||Xw − t||^2 subject to ||w||^2 ≤ C for some constant C. How would the hypothesis class capacity, overfitting/underfitting, and bias/variance vary according to C?

 
                               Larger C                 Smaller C
Model capacity (large/small?)  _____                    _____
Overfitting/Underfitting?      __fitting                __fitting
Bias/variance (high/low?)      __ bias / __ variance    __ bias / __ variance
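A minimal numerical sketch of the effect being asked about (the toy data, the 1-D setting, and the projection step are my own assumptions, not part of the assignment): in one dimension, minimizing ||xw − t||^2 subject to w^2 ≤ C amounts to clipping the least-squares weight to [−√C, √C], so a small C caps the model's capacity and forces underfitting, while a large C leaves the least-squares fit untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D data generated as t = 2*x + noise (assumed for illustration).
x = rng.uniform(-1.0, 1.0, size=30)
t = 2.0 * x + rng.normal(0.0, 0.1, size=30)

def constrained_fit(x, t, C):
    """Least-squares fit of t ~ w*x, then enforce ||w||^2 <= C.

    In 1-D the constrained minimizer of a convex quadratic is simply
    the unconstrained minimizer clipped to the feasible interval.
    """
    w = np.sum(x * t) / np.sum(x * x)   # unconstrained least-squares weight
    bound = np.sqrt(C)
    return float(np.clip(w, -bound, bound))

# Small C: the weight cannot reach the data-generating value 2,
# so the model underfits (high bias, low variance).
w_small = constrained_fit(x, t, C=0.25)   # ||w|| <= 0.5

# Large C: the constraint is inactive; the fit tracks the data
# closely (low bias, higher variance).
w_large = constrained_fit(x, t, C=100.0)

print(w_small, w_large)
```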
Note: No proof is needed.

Problem 2.

๐‘ก(๐‘š) ∼ ๐‘(๐‘ค๐‘ฅ(๐‘š), σ

๐‘คConsider a one-dimensional linear regression model∼ ๐‘(0, σ 2). Show that the posterior of ๐‘ค is also a Gaussian distribution, i.e.,ฯต2) with a Gaussian prior

๐‘ค|๐‘ฅ(1), ๐‘ก(1), ๐‘ค ···, ๐‘ฅ(๐‘€), ๐‘ก(๐‘€) ∼ ๐‘(µ๐‘๐‘œ๐‘ ๐‘ก, σ๐‘๐‘œ๐‘ ๐‘ก2). Give the formulas for µ๐‘๐‘œ๐‘ ๐‘ก, σ๐‘๐‘œ๐‘ ๐‘ก2.

Note: If a prior has the same formula (but typically with different parameters) as the posterior, itHint: Work with ๐‘ƒ(๐‘ค|๐ท) ∝ ๐‘ƒ(๐‘ค)๐‘ƒ(๐ท|๐‘ค). Do not handle the normalizing term.

is known as a conjugate prior. The above conjugacy also applies to multi-dimensional Gaussian, but the formulas for the mean vector and the covariance matrix will be more complicated.
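As a sketch of the route the hint points at (one standard completing-the-square argument, using the problem's own symbols; not necessarily the write-up the course expects):

```latex
% Posterior up to the normalizing term, per the hint P(w|D) \propto P(w) P(D|w):
P(w \mid \mathcal{D})
  \propto \exp\!\left(-\frac{w^2}{2\sigma^2}\right)
    \prod_{m=1}^{M}
    \exp\!\left(-\frac{\bigl(t^{(m)} - w x^{(m)}\bigr)^2}{2\sigma_\epsilon^2}\right)
% The exponent is quadratic in w, so the posterior is Gaussian;
% collecting the w^2 and w coefficients gives
\sigma_{\mathrm{post}}^2
  = \left(\frac{1}{\sigma^2}
        + \frac{\sum_{m=1}^{M} \bigl(x^{(m)}\bigr)^2}{\sigma_\epsilon^2}\right)^{-1},
\qquad
\mu_{\mathrm{post}}
  = \sigma_{\mathrm{post}}^2 \,
    \frac{\sum_{m=1}^{M} x^{(m)} t^{(m)}}{\sigma_\epsilon^2}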

Problem 3.

Give the prior distribution of w for linear regression, such that the maximum a posteriori estimation is equivalent to the l1-penalized mean square loss.
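A hedged sketch of why a Laplace prior is the kind of distribution being asked for (the scale parameter b is an assumption introduced here for illustration): its negative log density is proportional to |w|, so the MAP objective becomes squared loss plus an l1 penalty.

```latex
% Laplace prior (scale b > 0 is an assumed parameter):
P(w) = \frac{1}{2b} \exp\!\left(-\frac{|w|}{b}\right)
% MAP estimation with the Gaussian likelihood t^{(m)} \sim N(w x^{(m)}, \sigma_\epsilon^2):
\hat{w}_{\mathrm{MAP}}
  = \arg\max_w \; \log P(w) + \log P(\mathcal{D} \mid w)
  = \arg\min_w \; \sum_{m=1}^{M} \bigl(t^{(m)} - w x^{(m)}\bigr)^2 + \lambda |w|,
\qquad \lambda = \frac{2\sigma_\epsilon^2}{b}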
