TDT4171 - Assignment 2: Probabilistic Reasoning over Time

1       Hidden Markov Model
 
Some tourists are curious whether there are fish in a nearby lake. They are unable to observe this directly by staring into the lake. However, they can observe whether or not there are birds nearby, and the presence of birds is affected by the presence of fish. Based on their instincts, the tourists propose the following domain theory:

1.   The prior probability of fish nearby (that is, without any observation) is 0.5.

2.   The probability of fish nearby on day t is 0.8 given there are fish nearby on day t − 1, and 0.3 if not.

3.   The probability of birds nearby on day t if there are fish nearby on the same day is 0.75, and 0.2 if not.

The following evidence is given:

•   e1 = {birds nearby}

•   e2 = {birds nearby}

•   e3 = {no birds nearby}

•   e4 = {birds nearby}

•   e5 = {no birds nearby}

•   e6 = {birds nearby}

We will denote the state variable for fish nearby on day t by X_t.

Instructions
Use programming to solve all exercises in this section that involve computation. The results must be extracted from the program and documented in a human-readable, easy-to-understand format in a PDF file. Additionally, write a few sentences to give the results some context. The results can, for example, be plotted using Matplotlib [1] to give a more straightforward overview.

The code must be runnable without any modifications after delivery. Moreover, the code must be readable and contain comments explaining it. We recommend using Python with the NumPy package [2] for the programming exercises. It is not allowed to use libraries such as Scikit-learn [3] to solve the tasks.

Problems
(a)   Formulate the information given above as a hidden Markov model, and provide the complete probability tables for the model.
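For the programming parts below, the tables from (a) can be encoded directly as NumPy arrays. This is a minimal sketch; the array names (PRIOR, TRANSITION, SENSOR) and the state encoding (index 0 = fish nearby, index 1 = no fish nearby) are our own conventions, not prescribed by the assignment.

```python
import numpy as np

# Prior P(X_0): index 0 = fish nearby, index 1 = no fish nearby.
PRIOR = np.array([0.5, 0.5])

# Transition model P(X_t | X_{t-1}); rows index X_{t-1}, columns index X_t.
TRANSITION = np.array([[0.8, 0.2],   # fish nearby on day t-1
                       [0.3, 0.7]])  # no fish nearby on day t-1

# Sensor model P(E_t | X_t); rows index X_t, columns = (birds, no birds).
SENSOR = np.array([[0.75, 0.25],     # fish nearby
                   [0.20, 0.80]])    # no fish nearby

# Evidence e_1, ..., e_6: True = birds nearby.
EVIDENCE = [True, True, False, True, False, True]
```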

(b)   Compute

$$P(X_t \mid e_{1:t}), \quad \text{for } t = 1, \dots, 6. \tag{1}$$

What kind of operation is this (filtering, prediction, smoothing, likelihood of the evidence, or most likely sequence)? Describe in words what kind of information this operation provides.
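Equation (1) can be evaluated with the forward (filtering) recursion. The sketch below assumes the arrays defined under (a); it is one possible implementation, not the required one.

```python
import numpy as np

PRIOR = np.array([0.5, 0.5])                       # P(X_0): [fish, no fish]
TRANSITION = np.array([[0.8, 0.2], [0.3, 0.7]])    # P(X_t | X_{t-1})
SENSOR = np.array([[0.75, 0.25], [0.20, 0.80]])    # P(E_t | X_t)
EVIDENCE = [True, True, False, True, False, True]  # True = birds nearby

def forward(prior, evidence):
    """Return the filtered distributions P(X_t | e_{1:t}) for t = 1..len(evidence)."""
    f, filtered = prior, []
    for e in evidence:
        f = f @ TRANSITION                # predict: sum out X_{t-1}
        f = f * SENSOR[:, 0 if e else 1]  # update: weight by evidence likelihood
        f = f / f.sum()                   # normalize
        filtered.append(f)
    return filtered

for t, f in enumerate(forward(PRIOR, EVIDENCE), start=1):
    print(f"P(X_{t} | e_1:{t}) = {f}")
```

Each loop iteration is one step of the recursion: predict with the transition model, then condition on the day's observation.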

(c)    Compute

$$P(X_t \mid e_{1:6}), \quad \text{for } t = 7, \dots, 30. \tag{2}$$

What kind of operation is this (filtering, prediction, smoothing, likelihood of the evidence, or most likely sequence)? Describe in words what kind of information this operation provides. What happens to the distribution in Equation (2) as t increases?
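Beyond day 6 there is no new evidence, so each step applies only the transition model to the last filtered distribution. A sketch, assuming the arrays from (a); the starting vector below is a placeholder for the actual P(X_6 | e_{1:6}) computed in (b):

```python
import numpy as np

TRANSITION = np.array([[0.8, 0.2], [0.3, 0.7]])  # P(X_t | X_{t-1})

# Placeholder: replace with the filtered distribution P(X_6 | e_{1:6}) from (b).
f = np.array([0.5, 0.5])

for t in range(7, 31):
    f = f @ TRANSITION  # one prediction step: transition model only, no evidence
    print(f"P(X_{t} | e_1:6) = {f}")
```

Whatever the starting vector, repeated application of the transition model drives the distribution toward the stationary distribution of the Markov chain; that is the behaviour the question asks you to describe.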

(d)   Compute

$$P(X_t \mid e_{1:6}), \quad \text{for } t = 0, \dots, 5. \tag{3}$$

What kind of operation is this (filtering, prediction, smoothing, likelihood of the evidence, or most likely sequence)? Describe in words what kind of information this operation provides.
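These posteriors can be computed with the forward-backward algorithm: combine the forward messages from (b) with backward messages that summarize the evidence after each day. A sketch, assuming the arrays from (a):

```python
import numpy as np

PRIOR = np.array([0.5, 0.5])                       # P(X_0): [fish, no fish]
TRANSITION = np.array([[0.8, 0.2], [0.3, 0.7]])    # P(X_t | X_{t-1})
SENSOR = np.array([[0.75, 0.25], [0.20, 0.80]])    # P(E_t | X_t)
EVIDENCE = [True, True, False, True, False, True]  # True = birds nearby

def forward_messages(prior, evidence):
    """Forward messages f_{1:k} for k = 0..t (normalized at each step)."""
    f, msgs = prior, [prior]
    for e in evidence:
        f = (f @ TRANSITION) * SENSOR[:, 0 if e else 1]
        f = f / f.sum()
        msgs.append(f)
    return msgs

def backward_messages(evidence):
    """Backward messages b_{k+1:t} for k = 0..t, computed right to left."""
    b, msgs = np.ones(2), [np.ones(2)]
    for e in reversed(evidence):
        b = TRANSITION @ (SENSOR[:, 0 if e else 1] * b)
        msgs.append(b)
    return list(reversed(msgs))

fs, bs = forward_messages(PRIOR, EVIDENCE), backward_messages(EVIDENCE)
for t in range(len(EVIDENCE)):   # t = 0..5, as Equation (3) requires
    s = fs[t] * bs[t]            # smoothed posterior, unnormalized
    print(f"P(X_{t} | e_1:6) = {s / s.sum()}")
```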

(e)   Compute

$$\operatorname*{arg\,max}_{x_1, \dots, x_{t-1}} P(x_1, \dots, x_{t-1}, X_t \mid e_{1:t}), \quad \text{for } t = 1, \dots, 6. \tag{4}$$

What kind of operation is this (filtering, prediction, smoothing, likelihood of the evidence, or most likely sequence)? Describe in words what kind of information this operation provides.
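Equation (4) is computed with the Viterbi algorithm, which replaces the sum in the forward recursion with a maximization and keeps backpointers to recover the best sequence. A sketch, assuming the arrays from (a):

```python
import numpy as np

PRIOR = np.array([0.5, 0.5])                       # P(X_0): [fish, no fish]
TRANSITION = np.array([[0.8, 0.2], [0.3, 0.7]])    # P(X_t | X_{t-1})
SENSOR = np.array([[0.75, 0.25], [0.20, 0.80]])    # P(E_t | X_t)
EVIDENCE = [True, True, False, True, False, True]  # True = birds nearby

def viterbi(prior, evidence):
    """Most likely state sequence x_{1:t} given e_{1:t} (0 = fish, 1 = no fish)."""
    # m[j] = max over x_{1:k-1} of P(x_{1:k-1}, X_k = j, e_{1:k})
    m = (prior @ TRANSITION) * SENSOR[:, 0 if evidence[0] else 1]
    backpointers = []
    for e in evidence[1:]:
        scores = m[:, None] * TRANSITION            # scores[i, j]: reach j via i
        backpointers.append(scores.argmax(axis=0))  # best predecessor of each j
        m = scores.max(axis=0) * SENSOR[:, 0 if e else 1]
    path = [int(m.argmax())]                        # best final state
    for bp in reversed(backpointers):               # follow backpointers
        path.append(int(bp[path[-1]]))
    return list(reversed(path))

for t in range(1, len(EVIDENCE) + 1):
    print(f"t = {t}: {viterbi(PRIOR, EVIDENCE[:t])}")
```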


2       Dynamic Bayesian Network
 
Some tourists visiting a cabin are interested in finding out if there are animals nearby. Every day, they can observe through their window whether there are animal tracks and whether the food they placed outside is gone. Furthermore, they believe that the animal tracks and the food being gone are conditionally independent given animals nearby (AnimalTracks_t ⊥⊥ FoodGone_t | AnimalsNearby_t). Based on gut feelings, the tourists provide the following domain theory:

1.   The prior probability of animals nearby (that is, without any observation) is 0.7.

2.   The probability of animals nearby on day t is 0.8 given that there were animals nearby on day t − 1, and 0.3 if not.

3.   The probability of animal tracks on day t if there are animals nearby on the same day is 0.7, and 0.2 if not.

4.   The probability of the food gone on day t if there are animals nearby on the same day is 0.3, and 0.1 if not.

The following evidence is given:

•   e1 = {animal tracks, food gone}

•   e2 = {no animal tracks, food gone}

•   e3 = {no animal tracks, food not gone}

•   e4 = {animal tracks, food not gone}

We will denote the state variable for animals nearby on day t by X_t.

Instructions
Solve all exercises in this section that involve computation by hand, not by programming. The results must be accompanied by the solution steps and justifications, not only the final answers.

Problems
(a) Formulate the information given above as a dynamic Bayesian network and provide the complete probability tables for the model.
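For the hand computations below, the conditional-independence assumption makes the evidence likelihood factorize, so the forward (filtering) recursion for this model takes the standard form (a reference form, not part of the assignment statement; a_t and f_t denote the observed AnimalTracks and FoodGone values on day t, and α is a normalizing constant):

$$P(X_t \mid e_{1:t}) = \alpha \, P(a_t \mid X_t) \, P(f_t \mid X_t) \sum_{x_{t-1}} P(X_t \mid x_{t-1}) \, P(x_{t-1} \mid e_{1:t-1})$$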

(b) Compute

$$P(X_t \mid e_{1:t}), \quad \text{for } t = 1, 2, 3, 4. \tag{5}$$
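As a sanity check of the arithmetic, the first step works out as follows (a sketch only; the remaining days follow the same pattern):

$$P(X_1 = \text{true}) = 0.8 \cdot 0.7 + 0.3 \cdot 0.3 = 0.65$$

$$P(X_1 \mid e_1) = \alpha \, \langle 0.7 \cdot 0.3 \cdot 0.65, \; 0.2 \cdot 0.1 \cdot 0.35 \rangle = \alpha \, \langle 0.1365, \, 0.007 \rangle \approx \langle 0.951, \, 0.049 \rangle$$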

(c) Compute

$$P(X_t \mid e_{1:4}), \quad \text{for } t = 5, 6, 7, 8. \tag{6}$$
(d) By forecasting further and further into the future, you should see that the probability converges towards a fixed point. Verify that

$$\lim_{t \to \infty} P(X_t \mid e_{1:4}) = \langle 0.6, 0.4 \rangle. \tag{7}$$
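The fixed point can also be verified directly: a stationary distribution is unchanged by one application of the transition model, so writing π = P(X_t = true),

$$\pi = 0.8 \, \pi + 0.3 \, (1 - \pi) \;\Longrightarrow\; 0.5 \, \pi = 0.3 \;\Longrightarrow\; \pi = 0.6,$$

which matches the limiting distribution ⟨0.6, 0.4⟩ in Equation (7).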

(e) Compute

$$P(X_t \mid e_{1:4}), \quad \text{for } t = 0, 1, 2, 3. \tag{8}$$
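Smoothing for these values of t combines the forward messages from (b) with a backward pass; with the factorized sensor model, the standard recursion reads (starting from b_{5:4} = ⟨1, 1⟩):

$$b_{k+1:4}(x_k) = \sum_{x_{k+1}} P(a_{k+1} \mid x_{k+1}) \, P(f_{k+1} \mid x_{k+1}) \, b_{k+2:4}(x_{k+1}) \, P(x_{k+1} \mid x_k)$$

$$P(X_k \mid e_{1:4}) = \alpha \, f_{1:k}(X_k) \, b_{k+1:4}(X_k)$$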
