
CENG477

Introduction to Computer Graphics

Assignment 1 - Ray Tracing

1             Objectives
Ray tracing is a fundamental rendering algorithm. It is commonly used for animations and architectural simulations, in which the quality of the created images is more important than the time it takes to create them. In this assignment, you are going to implement a basic ray tracer that simulates the propagation of light in the real world.

Keywords: ray tracing, light propagation, geometric optics, ray-object intersections, surface shading

2             Specifications
1.    You should name your executable as “raytracer”.

2.    Your executable will take an XML scene file as argument (e.g. “scene.xml”). A parser will be given to you, so that you do not have to worry about parsing the file yourself. The format of the file will be explained in Section 3. You should be able to run your executable via command “./raytracer scene.xml”.

3.    You will save the resulting images in the PPM format. A PPM writer will be given to you so you don’t have to worry about writing this file yourself. Interested readers can find the details of the PPM format at: http://netpbm.sourceforge.net/doc/ppm.html

4.    The scene file may contain multiple camera configurations. You should render as many images as the number of cameras. The output filename for each camera is also specified in the XML file.

5.    You will have at most 30 minutes to render the scenes for each input file on the inek machines. Programs exceeding this limit will be killed and assumed to have produced no image. Note that the horse and mug scene is exempt from the 30-minute rule as it has a high object count, and rendering it is optional.

6.    You should use the Blinn-Phong shading model for the specular shading computations.

7.    You will implement two types of light sources: point and ambient. There may be multiple point light sources and a single ambient light. The values of these lights will be given as (R, G, B) color triplets that are not restricted to the [0, 255] range (however, they cannot be negative, as negative light does not make sense). Any pixel color value calculated by the shading computations that is greater than 255 must be clamped to 255 and rounded to the nearest integer before writing it to the output PPM file.

8.    Point lights will be defined by their intensity (power per unit solid angle). The irradiance due to such a light source falls off as inversely proportional to the squared distance from the light source. To simulate this effect, you must compute the irradiance at a distance of d from a point light as:

E(d) = I / d²

where I is the original light intensity (a triplet of RGB values given in the XML file) and E(d) is the irradiance at distance d from the light source.

9.    Back-face culling is a method used to accelerate ray - scene intersections by not computing intersections with triangles whose normals point away from the camera. Its implementation is simple: compute the dot product of the ray direction with the normal vector of the triangle; if the result is positive, that triangle is ignored. Note that shadow rays should not use back-face culling. In this homework, the back-face culling implementation is optional. A short sketch illustrating items 6-9 is given below.
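
To make items 6-9 concrete, here is a minimal C++ sketch of the per-light shading, clamping, and back-face test. The Vec3, Material, and PointLight types and all function names are hypothetical placeholders, not part of the parser code handed out with the assignment; shadow rays, multiple lights, and mirror recursion are omitted for brevity.

    #include <algorithm>
    #include <cmath>

    struct Vec3 { double x, y, z; };
    static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 operator*(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
    static Vec3 operator*(Vec3 a, Vec3 b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; } // per channel
    static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static double length(Vec3 a) { return std::sqrt(dot(a, a)); }
    static Vec3 normalize(Vec3 a) { double l = length(a); return {a.x / l, a.y / l, a.z / l}; }

    struct Material   { Vec3 ambient, diffuse, specular; double phongExponent; };
    struct PointLight { Vec3 position, intensity; };

    // Item 8: irradiance falls off with squared distance, E(d) = I / d^2.
    Vec3 irradiance(const PointLight& light, Vec3 p) {
        double d = length(light.position - p);
        return light.intensity * (1.0 / (d * d));
    }

    // Items 6-7: ambient + diffuse + Blinn-Phong specular for a single point
    // light, assuming the hit point is not in shadow.
    Vec3 shade(Vec3 p, Vec3 n, Vec3 eye, const Material& m,
               const PointLight& light, Vec3 ambientRadiance) {
        Vec3 color = m.ambient * ambientRadiance;           // ambient term
        Vec3 wi = normalize(light.position - p);            // direction to light
        Vec3 wo = normalize(eye - p);                       // direction to camera
        Vec3 E  = irradiance(light, p);
        color = color + m.diffuse * E * std::max(0.0, dot(n, wi));   // diffuse term
        Vec3 h = normalize(wi + wo);                        // Blinn-Phong half vector
        color = color + m.specular * E * std::pow(std::max(0.0, dot(n, h)), m.phongExponent);
        return color;
    }

    // Item 7: clamp to 255 and round before writing a channel to the PPM file.
    unsigned char toByte(double channel) {
        return static_cast<unsigned char>(std::round(std::min(channel, 255.0)));
    }

    // Item 9 (optional): cull triangles facing away from the ray (never for shadow rays).
    bool backFacing(Vec3 rayDirection, Vec3 triangleNormal) {
        return dot(rayDirection, triangleNormal) > 0.0;
    }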

3             Scene File
The scene file will be formatted as an XML file (see Section 7). In this file, there may be different numbers of materials, vertices, triangles, spheres, lights, and cameras. Each of these is defined by a unique integer ID. The IDs for each type of element start from one and increase sequentially. Also notice that, in the XML file:

· Every number represented by X, Y and Z is a floating point number.

· Every number represented by R, G, B, and N is an integer.

Explanations for each XML tag are provided below:

•    BackgroundColor: Specifies the R, G, B values of the background. If a ray sent through a pixel does not hit any object, the pixel will be set to this color. Only applicable for primary rays sent through pixels.

•    ShadowRayEpsilon: When a ray hits an object, you are going to send a shadow ray from the intersection point to each point light source to decide whether the hit point is in shadow or not. Due to floating point precision errors, the shadow ray sometimes hits the same object even though it should not. Therefore, you must use this small ShadowRayEpsilon value, which is a floating point number, to move the shadow ray's origin slightly away from the hit point in the direction of the hit point's normal vector so that the shadow ray does not intersect the same object again. Note that the ShadowRayEpsilon value can also be used to avoid self-intersections while casting reflection rays from the intersection point.

•    MaxRecursionDepth: Specifies the maximum number of bounces a ray makes off of mirror-like objects. Applicable only when a material is of type mirror. Primary rays are assumed to start with a bounce count of zero.

•    Camera:

· Position parameters define the coordinates of the camera.

· Gaze parameters define the direction that the camera is looking at. You must assume that the Gaze vector of the camera is always perpendicular to the image plane.

· Up parameters define the up vector of the camera.

· NearPlane attribute defines the coordinates of the image plane with Left, Right, Bottom, Top floating point parameters, respectively.

· NearDistance defines the distance of the image plane to the camera.

· ImageResolution defines the resolution of the image with Width and Height integer parameters, respectively.

· ImageName defines the name of the output file.

Cameras defined in this homework will be right-handed. The mapping of Up and Gaze vectors to the camera terminology used in the course slides is given as:

Up = v

Gaze = −w

u = v × w

A sketch of how the camera basis and primary rays can be set up from these parameters is given after this list.

•    AmbientLight: is defined by just an X, Y, Z radiance triplet. This is the amount of light received by each object even when the object is in shadow. Color channel order of this triplet is RGB.

•    PointLight: is defined by a position and an intensity, which are all floating point numbers. Color channel order of intensity is RGB.

•    Material: A material can be defined with ambient, diffuse, specular, and mirror reflectance properties for each color channel. The values are floats between 0.0 and 1.0, and color channel order is RGB.

PhongExponent defines the specularity exponent in Blinn-Phong shading.

MirrorReflectance represents how mirror-like the material is. If the material is of type mirror, you must cast new rays and scale the resulting color value by the MirrorReflectance parameters. The attribute type=“mirror” is provided only for materials which have non-zero MirrorReflectance.

•    VertexData: Each line contains a vertex whose x, y, and z coordinates are given as floating point values, respectively. The first vertex’s ID is 1.

•    Mesh: Each mesh is composed of several faces. A face is actually a triangle, which contains three vertices. When defining a mesh, each line in the Faces attribute defines a triangle. Therefore, each line is composed of three integer vertex IDs given in counter-clockwise order (see the Triangle explanation below). The Material attribute represents the material ID of the mesh.

•    Triangle: A triangle is represented by Material and Indices attributes. Material attribute represents the material ID. Indices are the integer vertex IDs of the vertices that construct the triangle. Vertices are given in counter-clockwise order, which is important when you want to calculate the normals of the triangles. Counter-clockwise order means that if you close your right-hand following the order of the vertices, your thumb points in the direction of the surface normal.

•    Sphere: A sphere is represented by Material, Center, and Radius attributes. The Material attribute represents the material ID. Center represents the vertex ID of the point which is the center of the sphere. The Radius attribute is the radius of the sphere. Sketches of ray-sphere intersection, the triangle normal computation, and the epsilon offsets described above are also given after this list.
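
The Camera entry above contains everything needed to build the camera basis and generate primary rays. The following is a rough C++ sketch under the stated convention (Up = v, Gaze = −w, u = v × w); the Camera and Ray structs and their field names are hypothetical placeholders that merely mirror the XML tags, not the actual parser output.

    #include <cmath>

    struct Vec3 { double x, y, z; };
    static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
    static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static Vec3 operator*(Vec3 a, double s) { return {a.x * s, a.y * s, a.z * s}; }
    static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
    static Vec3 cross(Vec3 a, Vec3 b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }
    static Vec3 normalize(Vec3 a) { double l = std::sqrt(dot(a, a)); return {a.x / l, a.y / l, a.z / l}; }

    struct Camera {
        Vec3 position, gaze, up;
        double left, right, bottom, top;   // NearPlane
        double nearDistance;               // NearDistance
        int width, height;                 // ImageResolution
    };

    struct Ray { Vec3 origin, direction; };

    // Primary ray through the center of pixel (i, j), 0 <= i < width, 0 <= j < height.
    Ray primaryRay(const Camera& cam, int i, int j) {
        Vec3 w = normalize(cam.gaze * -1.0);   // Gaze = -w
        Vec3 v = normalize(cam.up);            // Up = v (assumed perpendicular to Gaze)
        Vec3 u = cross(v, w);                  // u = v x w (right-handed)

        Vec3 m = cam.position - w * cam.nearDistance;   // center of the image plane
        Vec3 q = m + u * cam.left + v * cam.top;        // top-left corner of the plane

        double su = (cam.right - cam.left)   * (i + 0.5) / cam.width;
        double sv = (cam.top   - cam.bottom) * (j + 0.5) / cam.height;
        Vec3 s = q + u * su - v * sv;                   // sample point on the plane

        return {cam.position, normalize(s - cam.position)};
    }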
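
Similarly, the ShadowRayEpsilon, MirrorReflectance, Triangle, and Sphere entries translate into a handful of short geometric routines. The sketch below reuses the Vec3 helpers from the block above; all function names are made up for illustration, and a full intersection routine would also track the nearest hit and its material.

    // Smallest non-negative t with origin + t * dir on the sphere, or -1 if there
    // is no hit (dir is assumed to be normalized).
    double intersectSphere(Vec3 origin, Vec3 dir, Vec3 center, double radius) {
        Vec3 oc = origin - center;
        double b = dot(dir, oc);
        double c = dot(oc, oc) - radius * radius;
        double disc = b * b - c;
        if (disc < 0.0) return -1.0;
        double t = -b - std::sqrt(disc);        // nearer root first
        if (t < 0.0) t = -b + std::sqrt(disc);
        return t;
    }

    // Normal of a triangle whose vertices are given in counter-clockwise order.
    Vec3 triangleNormal(Vec3 a, Vec3 b, Vec3 c) {
        return normalize(cross(b - a, c - a));
    }

    // Offset a secondary ray's origin by ShadowRayEpsilon along the surface normal
    // so that it does not re-intersect the surface it starts from.
    Vec3 offsetOrigin(Vec3 hitPoint, Vec3 normal, double shadowRayEpsilon) {
        return hitPoint + normal * shadowRayEpsilon;
    }

    // Mirror reflection direction (r = d - 2 (d . n) n); such rays are cast
    // recursively up to MaxRecursionDepth bounces and the returned color is
    // scaled by MirrorReflectance per channel.
    Vec3 reflect(Vec3 d, Vec3 n) {
        return d - n * (2.0 * dot(d, n));
    }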

4             Hints & Tips
1.    Start early. It takes time to get a ray tracer up and running.

2.    You may use the -O3 option while compiling your code for optimization. This alone can provide a significant performance improvement.

3.    Try to pre-compute anything that would be used multiple times and save these results. For example, you can pre-compute the normals of the triangles and save them in your triangle data structure when you read the input file.

4.    If you see generally correct but noisy results (black dots), it is most likely a floating point precision error (you may be checking for exact equality of two floating point numbers instead of checking whether they are within a small epsilon). A tiny sketch of this check is given after this list.

5.    For debugging purposes, consider using low resolution images. Also it may be necessary to debug your code by tracing what happens for a single pixel (always simplify the problem when debugging).

6.    We will not test your code with strange configurations such as the camera being inside a sphere, a point light being inside a sphere, or with objects in front of the image plane. If you doubt whether a special case in addition to these will be tested, you may ask this on ODTUClass.
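
As a tiny illustration of hint 4, intersection parameters can be validated against a small tolerance instead of an exact comparison with zero; the constant below is only an example, and many implementations simply reuse the scene's ShadowRayEpsilon.

    // Treat intersections closer than a small epsilon as misses to avoid
    // speckling caused by floating point precision errors.
    const double kIntersectionEpsilon = 1e-6;

    bool validHit(double t) {
        return t > kIntersectionEpsilon;   // rather than: t > 0.0
    }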
