
HL2027 Miniproject 1
In this mini-project you will implement and test a reconstruction method with a regularisation term for 3D CT data of realistic size (ca. 750 MB). More formally: let x be the image to be reconstructed, and let y be the result of applying the ray transform (forward) operator A to x with additive noise ε, that is, y = Ax + ε. The goal is to estimate x as well as possible.

Since this is an ill-posed problem, a standard remedy is to add a regularisation term. In particular, you are going to use the so-called Huber regularisation. This is a Total Variation-type regularisation, where the ℓ¹-norm is replaced with the Huber norm for convenience. Therefore, instead of solving the least-squares problem

$$\min_{x} \|Ax - y\|^2 , \tag{1}$$

you have to choose a parameter λ > 0 and solve the regularised problem

$$\min_{x} \|Ax - y\|^2 + \lambda\, H_\gamma(\nabla x) , \tag{2}$$

where ‖·‖ is the norm on the data space (odl.solvers.L2Norm), Hγ is the Huber norm, and ∇ is an operator that computes the gradient of a function (odl.Gradient). Since in practice images have a discrete representation, the gradient is approximated by computing differences between adjacent pixels/voxels. The Huber norm (odl.solvers.Huber) is defined as

$$H_\gamma(z) = \sum_i h_\gamma(\|z_i\|) \tag{3}$$

where hγ(·) is the Huber function

$$h_\gamma(t) = \begin{cases} \dfrac{t^2}{2\gamma}, & |t| \le \gamma, \\[4pt] |t| - \dfrac{\gamma}{2}, & |t| > \gamma. \end{cases} \tag{4}$$

The Huber function hγ(t) is a differentiable approximation of the absolute value |t|. It is used instead of the absolute value because differentiable functions are easier to minimize. Usually γ is chosen very small.
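As a sanity check, eq. (4) can be evaluated directly in NumPy; the helper huber below is a hypothetical illustration, not an ODL function:

    import numpy as np

    def huber(t, gamma):
        # Quadratic near 0 (hence differentiable), linear for |t| > gamma.
        return np.where(np.abs(t) <= gamma,
                        t ** 2 / (2 * gamma),
                        np.abs(t) - gamma / 2)

    t = np.linspace(-1, 1, 5)
    print(huber(t, gamma=0.01))  # close to |t| away from 0
    print(np.abs(t))

For small γ the two printed arrays agree except in a γ-neighbourhood of 0.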

Solving eq. (1) leads to noisy pixel values (because of ill-posedness). As shown in the course, iterative methods then have to rely on early stopping. The idea of Huber regularisation is to force adjacent pixels to have similar values by penalizing large gradients. In addition, using the ℓ¹-norm or the Huber norm promotes solutions in which most gradient values are zero, while still allowing some gradient values to be large. Thus, this regularisation promotes images with large constant regions and sharp borders, e.g. cartoon-like images. Solving eq. (2) amounts to a trade-off between finding an x that fits the data (i.e., ‖Ax − y‖ is small) and one that is not very noisy, that is, Hγ(∇x) must not be too large.

For a suitable choice of the regularisation parameter λ, there is no need to use early stopping: the issue of semi-convergence is (at least partially) taken care of.

Tasks
Write code for solving the regularised problem. If you want, you can use any optimization method implemented in ODL (odl.solvers.smooth). In any case, you have to make sure that the method converges by providing appropriate hyperparameters.
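As a minimal sketch, assuming the objective Q of eq. (2) has already been assembled as an ODL functional (see the hints further below), ODL's built-in gradient descent can be applied directly; the step size 1e-4 is only an example and must be tuned for convergence:

    x = Q.domain.zero()  # initial guess: the zero image
    odl.solvers.steepest_descent(
        Q, x,
        line_search=1e-4,  # constant step size; choose small enough for convergence
        maxiter=100,
        callback=odl.solvers.CallbackPrintIteration())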

Test the code on 2D phantoms with added noise. You can find some in ODL (under odl.phantom).
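A small 2D test problem could be set up as follows; the geometry, resolution and noise level are example choices:

    # 2D test space and a default parallel-beam geometry.
    space = odl.uniform_discr([-20, -20], [20, 20], (256, 256))
    geometry = odl.tomo.parallel_beam_geometry(space)
    ray_trafo = odl.tomo.RayTransform(space, geometry)

    # Shepp-Logan phantom and a noisy sinogram.
    phantom = odl.phantom.shepp_logan(space, modified=True)
    y = ray_trafo(phantom) + 0.05 * odl.phantom.white_noise(ray_trafo.range)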

Choose the parameter γ for the Huber functional and motivate your choice.

Now for the regularisation parameter: we have to find a λ that gives a good compromise between data fit and regularisation. You should implement one of the methods from the lecture to find a good choice of λ for a phantom with different levels of noise. For a few values of λ and noise levels, compute the evolution of the error with the number of iterations by comparing the noiseless image to the reconstructed one. Make sure that you adapt the method to the problem in eq. (2) and describe in your own words what you are doing.
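One way to record the error evolution is a custom callback that compares the iterate to the noiseless phantom after every step; a sketch, reusing Q, phantom and the example step size from the snippets above:

    errors = []

    def record_error(x):
        # Relative L2 error against the noiseless ground truth.
        errors.append((x - phantom).norm() / phantom.norm())

    x = Q.domain.zero()
    odl.solvers.steepest_descent(Q, x, line_search=1e-4, maxiter=200,
                                 callback=record_error)

    plt.semilogy(errors)
    plt.xlabel('iteration')
    plt.ylabel('relative error')
    plt.show()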

ONLY when the code works on 2D phantoms, test it on the realistic dataset. This is very important, as the running time can be very long depending on the number of iterations you set. Describe the effect of using different λ and describe qualitatively the evolution of the reconstructed image. Show some slices of the result. Avoid using too many iterations, as this will be very slow.

Compute a reconstruction using filtered back-projection (odl.tomo.fbp_op). Try changing the filter type and the frequency scaling. The frequency scaling is the relative frequency cut-off of the filter; a smoother image can be obtained by choosing smaller values. Compare the results to the method used in the previous tasks.
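A sketch of the FBP reconstruction; 'Hann' and 0.5 are example values for the filter type and frequency scaling:

    fbp = odl.tomo.fbp_op(ray_trafo, filter_type='Hann', frequency_scaling=0.5)
    x_fbp = fbp(y)
    x_fbp.show('FBP reconstruction')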

Document your tests and the results of the problem.

Things you might consider
The following are useful dependencies for your code, in particular the last entry provides you with a specific geometry for the data.

    import numpy as np
    import matplotlib.pyplot as plt
    import odl
    import odl.contrib.tomo

The ray transform is obtained as follows in ODL:

    # The geometry is already included in ODL.
    geometry = odl.contrib.tomo.elekta_icon_geometry()

    # The reconstruction space.
    reco_space = odl.uniform_discr([-112, -112, 0],
                                   [112, 112, 224],
                                   (448, 448, 448), dtype='float32')

    # With these data, we can compute the ray transform
    # (which is the operator A in the text above).
    # This operator can be applied to a reco_space.element()
    # and returns a data_space.element().
    ray_trafo = odl.tomo.RayTransform(reco_space, geometry)

    # The data space is the range of the ray transform.
    data_space = ray_trafo.range

The simplest way to minimize the objective function Q(x) in eq. (2) is the steepest descent (gradient descent) method:

    1: x0 arbitrary, r0 := −∇Q(x0), t: step size
    2: for k := 0, 1, ... do
    3:   xk+1 := xk + t·rk
    4:   rk+1 := −∇Q(xk+1)
    5: end for

Note: Landweber’s method is a special case of steepest descent.
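In ODL the loop above takes only a few lines; a sketch with a fixed step size t, assuming Q is the ODL functional for eq. (2):

    x = Q.domain.zero()
    t = 1e-4  # fixed step size; must satisfy the convergence condition below
    for k in range(100):
        x = x - t * Q.gradient(x)  # x_{k+1} := x_k + t*r_k with r_k = -grad Q(x_k)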

Optionally, you may improve the steepest descent by implementing a line search:

    1: x0 arbitrary, r0 := −∇Q(x0)
    2: for k := 0, 1, ... do
    3:   xk+1 := minimiser of Q on the half-line t ↦ xk + t·rk
    4:   rk+1 := −∇Q(xk+1)
    5: end for
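Instead of implementing the line search yourself, ODL's backtracking line search can be plugged into steepest_descent; a sketch:

    line_search = odl.solvers.BacktrackingLineSearch(Q, max_num_iter=20)
    x = Q.domain.zero()
    odl.solvers.steepest_descent(Q, x, line_search=line_search, maxiter=100)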

Steepest descent is known to converge if the step size satisfies 0 < t < 2/L, where L is the Lipschitz constant of ∇Q.

In ODL, multiplication of operators f * g implements the composition f(g(·)). For example, the functional ‖Ax − y‖² can be defined in ODL as follows:

    Q = odl.solvers.L2NormSquared(data_space).translated(y) * ray_trafo

You can also easily sum operators and multiply them by scalars. For details, see the documentation: https://odlgroup.github.io/odl/guide/operator_guide.html. Use this to define the regularisation term as in eq. (2).
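Putting the pieces together (renaming the data-fit functional above), eq. (2) might be assembled as follows; y denotes the data, and the values of lam and gamma are placeholders to be tuned:

    grad = odl.Gradient(reco_space)  # the discrete nabla in eq. (2)
    data_fit = odl.solvers.L2NormSquared(data_space).translated(y) * ray_trafo
    huber = odl.solvers.Huber(grad.range, gamma=0.01)  # H_gamma as in eq. (3)
    lam = 0.1
    Q = data_fit + lam * huber * grad  # the objective of eq. (2)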

The operator ∇Q can be obtained in ODL via the .gradient property of a functional.

The norm of an operator can be estimated using the method .norm(estimate=True).

If you have a composite operator, you might want to use the properties of the operator norm (https://en.wikipedia.org/wiki/Operator_norm) to estimate its norm by parts.
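For example, a safe step size for the data-fit term can be derived from an estimated ‖A‖, since ∇‖Ax − y‖² = 2A*(Ax − y) has Lipschitz constant 2‖A‖²; a sketch:

    opnorm = ray_trafo.norm(estimate=True)  # power-iteration estimate of ||A||
    L = 2 * opnorm ** 2                     # Lipschitz constant of the data-fit gradient
    t = 1.0 / L                             # satisfies 0 < t < 2/L

Note that the Huber term contributes roughly λ‖∇‖²/γ on top of this, so the step may need to be reduced further.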

Document your code by describing what it does. Structure the functions you use so that they accept general input. Give a meaning to each parameter of your function.

The realistic data is available in the file noisy_data.npy, which contains data fitting the data space from above. Use np.load('/hl2027/noisy_data.npy') to get the data and data_space.element to go from a NumPy array to an ODL element. The file will also be available in Canvas for those who prefer not to use the server. Compute a reconstruction of the original data. Show some slices of the result. Avoid using too many iterations, as this will be very slow.
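A sketch of loading the data and inspecting slices of the reconstruction; the slice indices are arbitrary examples:

    # Wrap the provided NumPy array as an element of the data space.
    y = data_space.element(np.load('/hl2027/noisy_data.npy'))

    # ... run the reconstruction to obtain x, then show a few axial slices:
    x_arr = x.asarray()
    for k in (100, 224, 350):
        plt.imshow(x_arr[:, :, k].T, cmap='gray', origin='lower')
        plt.title('slice z = {}'.format(k))
        plt.show()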

