BIT Project 1: Warm-Up

task 1.1: acquaint yourself with python for pattern recognition

First of all, download the file

whExample.py
from the Google site that accompanies the lecture; second, retrieve the file

whData.dat
and run the above script. It should plot sizes vs. weights of students from an earlier course on pattern recognition[1].

Note that two students did not want to disclose their weight. This provides us with an opportunity to deal with missing data. In the plot produced by the script, these data points appear with negative weights (they are outliers).

For starters, figure out a way of plotting the data without the outliers, that is, plot only those data points for which both measurements are positive.
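A minimal sketch of such a filter is given below. It assumes (as whExample.py suggests) that the first two columns of whData.dat hold weight and size and that missing weights are encoded as negative numbers; adjust the column indices if the file layout differs:

```python
import numpy as np
import matplotlib.pyplot as plt

# assumption: columns 0 and 1 of whData.dat are weight and size;
# any further (non-numeric) columns are skipped via usecols
data = np.loadtxt('whData.dat', usecols=(0, 1))
weight, size = data[:, 0], data[:, 1]

# boolean mask: keep only rows where both measurements are positive
ok = (weight > 0) & (size > 0)

plt.plot(size[ok], weight[ok], 'ko')
plt.xlabel('size')
plt.ylabel('weight')
plt.show()
```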

task 1.2: fitting a Normal distribution to 1D data

Now consider only the data on body sizes, i.e. the 1D array containing the size information.

Compute the mean and standard deviation of this sample, then plot the data together with a Normal distribution characterizing its density. Your result should resemble the following figure:

[Figure: size data with the fitted Normal density]
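One possible sketch, reusing the `size` array from the previous task and the fact that the maximum likelihood estimates of a Normal distribution are simply the sample mean and standard deviation:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# ML estimates for the Normal distribution: sample mean and std. deviation
mu, sigma = size.mean(), size.std()

xs = np.linspace(size.min() - 15, size.max() + 15, 200)
plt.plot(size, np.zeros_like(size), 'ko')        # data points on the x-axis
plt.plot(xs, norm.pdf(xs, loc=mu, scale=sigma))  # fitted density
plt.show()
```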

task 1.3: fitting a Weibull distribution to 1D data

Download the file

myspace.csv
which contains Google Trends data that indicate how global interest in the query term “myspace” evolved over time. Read the data in the second column of the file and remove leading zeros! Store the remaining entries in an array $h = [h_1, h_2, \ldots, h_n]$ and create an array $x = [1, 2, \ldots, n]$.
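One way to read and trim the data, assuming myspace.csv is comma-separated with the trend values in its second column (adjust the delimiter or column index if the file differs):

```python
import numpy as np

# read only the second column (index 1); the date column is skipped
raw = np.genfromtxt('myspace.csv', delimiter=',', usecols=(1,))

h = raw[np.argmax(raw > 0):]     # drop the leading zeros
x = np.arange(1, len(h) + 1)     # x = [1, 2, ..., n]
```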

Now, fit a Weibull distribution to the histogram $h(x)$. The probability density function of the Weibull distribution is given by

$$p(x \mid \kappa, \alpha) = \frac{\kappa}{\alpha} \left(\frac{x}{\alpha}\right)^{\kappa - 1} \exp\!\left(-\left(\frac{x}{\alpha}\right)^{\kappa}\right)$$

where κ and α are a shape and scale parameter, respectively. Unlike in the case of the Normal distribution, maximum likelihood estimation of the parameters of the Weibull distribution is more involved.

Given a data sample $d_1, d_2, \ldots, d_n$, the log-likelihood for the parameters of the Weibull distribution is

$$L(\kappa, \alpha) = n \log \kappa - n \kappa \log \alpha + (\kappa - 1) \sum_{i=1}^{n} \log d_i - \sum_{i=1}^{n} \left(\frac{d_i}{\alpha}\right)^{\kappa}.$$

Differentiating L with respect to α and κ and setting the derivatives to zero leads to a coupled system of nonlinear equations for which there is no closed-form solution. Therefore, resort to Newton’s method for simultaneous equations and compute

$$\begin{pmatrix} \kappa_{t+1} \\ \alpha_{t+1} \end{pmatrix} = \begin{pmatrix} \kappa_{t} \\ \alpha_{t} \end{pmatrix} - \mathbf{H}^{-1} \nabla L \,\Big|_{(\kappa_t, \alpha_t)}$$

where the entries of the gradient vector and the Hessian matrix amount to

$$\frac{\partial L}{\partial \kappa} = \frac{n}{\kappa} - n \log \alpha + \sum_{i} \log d_i - \sum_{i} \left(\frac{d_i}{\alpha}\right)^{\kappa} \log \frac{d_i}{\alpha}
\qquad
\frac{\partial L}{\partial \alpha} = \frac{\kappa}{\alpha} \left( \sum_{i} \left(\frac{d_i}{\alpha}\right)^{\kappa} - n \right)$$

$$\frac{\partial^2 L}{\partial \kappa^2} = -\frac{n}{\kappa^2} - \sum_{i} \left(\frac{d_i}{\alpha}\right)^{\kappa} \left(\log \frac{d_i}{\alpha}\right)^{2}
\qquad
\frac{\partial^2 L}{\partial \alpha^2} = \frac{\kappa}{\alpha^2} \left( n - (\kappa + 1) \sum_{i} \left(\frac{d_i}{\alpha}\right)^{\kappa} \right)$$

$$\frac{\partial^2 L}{\partial \kappa \, \partial \alpha} = \frac{1}{\alpha} \left( \sum_{i} \left(\frac{d_i}{\alpha}\right)^{\kappa} \left(1 + \kappa \log \frac{d_i}{\alpha}\right) - n \right)$$

  note:

You are given a histogram $h(x)$ where $h_j$ counts the number of observations of a value $x_j$. The MLE procedure outlined above assumes that you are given individual observations $d_i$. That is, it assumes as input $h_1$ times a value of $x_1$, $h_2$ times a value of $x_2$, etc. In other words,

$$d_1 = x_1,\; d_2 = x_1,\; \ldots,\; d_{h_1} = x_1,\; d_{h_1 + 1} = x_2,\; \ldots,\; d_{h_1 + h_2} = x_2,\; \ldots$$

That is, you have to turn the histogram into a (large) set of numbers for the procedure to work. But there is also a more elegant solution; can you see and implement it?
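For reference, the expansion described above is a one-liner in NumPy (with the arrays h and x from before):

```python
import numpy as np

# h_j copies of x_j for every bin j -- the (large) sample d_1, ..., d_n
d = np.repeat(x, h.astype(int))
```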

Be ambitious! Analyze why the above approach is unnecessarily cumbersome and come up with a much faster approach.

If you initialize κ = 1 and α = 1 and run the estimation procedure for about 20 iterations, which values do you obtain for κ and α?
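A sketch of the full Newton iteration is given below (the helper name weibull_mle is ours). It implements the gradient and Hessian entries stated above, but uses the histogram counts $h_j$ as weights instead of expanding them into individual observations, which is one way to realize the more elegant solution hinted at in the note:

```python
import numpy as np

def weibull_mle(x, h, iters=20, kappa=1.0, alpha=1.0):
    # Newton's method for the Weibull log-likelihood; the histogram
    # counts h act as weights, so the histogram never has to be
    # expanded into individual observations
    n = h.sum()
    logx = np.log(x)
    for _ in range(iters):
        z = (x / alpha) ** kappa           # (x_j / alpha)^kappa
        lz = logx - np.log(alpha)          # log(x_j / alpha)
        # gradient entries (the formulas above, weighted by h_j)
        dk = n / kappa - n * np.log(alpha) + h @ logx - h @ (z * lz)
        da = (kappa / alpha) * (h @ z - n)
        # Hessian entries
        dkk = -n / kappa ** 2 - h @ (z * lz ** 2)
        dka = (h @ (z * (1.0 + kappa * lz)) - n) / alpha
        daa = (kappa / alpha ** 2) * (n - (kappa + 1.0) * (h @ z))
        # Newton update: (kappa, alpha) <- (kappa, alpha) - H^{-1} grad
        step = np.linalg.solve([[dkk, dka], [dka, daa]], [dk, da])
        kappa, alpha = kappa - step[0], alpha - step[1]
    return kappa, alpha

# hypothetical usage with the arrays from above:
# kappa, alpha = weibull_mle(x.astype(float), h.astype(float))
```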

When you plot the histogram and a correspondingly scaled version of the fitted distribution, your result should resemble this figure:

[Figure: the Google Trends histogram (“Google data”) overlaid with the scaled Weibull fit (“Weibull fit”)]

  note:

If you want to impress your professor, then additionally figure out how to use the function scipy.integrate.odeint to solve this task. Again, be ambitious and search the Web for tutorials...

task 1.4: drawing unit circles

In the lecture we discussed $L_p$ norms for $\mathbb{R}^m$ and saw that, for different $p$, the corresponding unit spheres may look different. For instance, the following examples show unit circles in $\mathbb{R}^2$:

[Figure: unit circles for several choices of $p$]

Consider the $L_p$ norm for  and plot the corresponding unit circle in $\mathbb{R}^2$.

Is this really a norm or not? Discuss why or why not!
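A sketch for plotting such a curve; the exponent p below is only a placeholder, so substitute the value given in the task:

```python
import numpy as np
import matplotlib.pyplot as plt

p = 0.5   # placeholder value; use the p specified in the task

# parametrize directions by an angle and scale each point so that
# |x|^p + |y|^p = 1 holds on the resulting curve
t = np.linspace(0.0, 2.0 * np.pi, 1000)
c, s = np.cos(t), np.sin(t)
r = (np.abs(c) ** p + np.abs(s) ** p) ** (-1.0 / p)

plt.plot(r * c, r * s)
plt.axis('equal')
plt.show()
```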

task 1.5: estimating the dimension of fractal objects in an image

In the lecture we discussed the use of least squares for linear regression; here we consider a neat practical application of this technique. Box counting is a method to determine the fractal dimension of an object in an image. For simplicity, let us focus on square images whose width and height (in pixels) are an integer power of two, for instance

$$w = h = 2^L = 512 \quad\Leftrightarrow\quad L = 9$$

Given such an image, the procedure involves three main steps:

1.     apply an appropriate binarization procedure to create a binary image in which foreground pixels are set to 1 and background pixels to 0

[Figure: input image → binary image]

2.     specify a set $S$ of scaling factors $0 < s_i < 1$, for instance

$$S = \left\{ \frac{1}{2^i} \;\middle|\; i = 1, 2, \ldots, L - 1 \right\}$$

and, for each $s_i \in S$, cover the binary image with boxes of size $s_i w \times s_i h$ and count the number $n_i$ of boxes that contain at least one foreground pixel

[Figure: box coverings of the binary image at successively finer scales]

3.     plot $\log n_i$ against $\log \frac{1}{s_i}$ and fit a line

$$\log n_i \approx D \log \frac{1}{s_i} + b$$

to this data; the resulting estimate for $D$ is the fractal dimension we are after.

 

In other words, the problem of estimating D is a simple linear regression problem that can of course be tackled using least squares.

Now, implement the box counting method and run it on the two test images tree-2.png and lightning-3.png.

What fractal dimensions do you obtain? Which object has the higher one, the tree or the lightning bolt?
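A sketch of the whole procedure is given below; the function name fractal_dimension is ours, and the mean-threshold at the end is only a naive placeholder for “an appropriate binarization procedure”:

```python
import numpy as np
import matplotlib.pyplot as plt

def fractal_dimension(img):
    # img: square binary array with side length 2**L, foreground pixels = 1
    N = img.shape[0]
    L = int(np.log2(N))
    log_inv_s, log_n = [], []
    for i in range(1, L):                  # scales s_i = 2**-i
        b = N // 2 ** i                    # box side length in pixels
        # partition the image into a (2**i x 2**i) grid of boxes and
        # mark every box that contains at least one foreground pixel
        boxes = img.reshape(2 ** i, b, 2 ** i, b).any(axis=(1, 3))
        log_inv_s.append(np.log(2 ** i))   # log(1 / s_i)
        log_n.append(np.log(boxes.sum()))
    # least squares line fit: log n_i = D * log(1 / s_i) + b
    D, _ = np.polyfit(log_inv_s, log_n, 1)
    return D

# hypothetical usage: naive mean-threshold binarization of a grayscale PNG
img = plt.imread('tree-2.png')
if img.ndim == 3:                          # collapse RGB(A) to grayscale
    img = img[..., :3].mean(axis=2)
binary = img > img.mean()                  # adapt threshold/polarity per image
print(fractal_dimension(binary))
```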


 
