CENG477 - Assignment 1

Objectives
Ray tracing is a fundamental rendering algorithm. It is commonly used in applications in which the quality of the images is more important than the time it takes to create them, such as architectural simulations and animations. In this assignment, you are going to implement a very basic ray tracer that simulates the propagation of light in a vacuum.

Specifications
•    A template for the implementation of a basic ray tracer is provided to you. This template consists of header files for the classes you will need, declarations of the public methods and public variables, an implementation of an XML parser, and a ppm file saver. Explanations of the files are not provided here. Take a look at the files before starting, read the comments carefully, and try to understand how you can use the provided utilities. You are allowed to add your own variables and methods only to the private sections of the classes, not to the public sections. The assignment is implementable this way; you do not actually need to add more public variables or methods to the classes.

•    Your program will take a scene file in XML format as a command-line argument. You should save the output image as a ppm file. Parsing of the XML file and a method to save the image as a ppm file are already implemented for you. Some sample scene files are provided in the inputs directory, and the correct outputs of those scenes are provided in the outputs directory.

•    The scene file may contain multiple camera configurations. You should render as many images as there are cameras. The output filename for each camera is also specified in the XML file. A sketch of this driver loop is given at the end of this section.

•    You will have at most 15 minutes to render scenes for each input file on Inek machines. Programs exceeding this limit will be killed and will be assumed to have produced no image.

•    You should use the Blinn-Phong shading model for the specular shading computations (a sketch is given at the end of this section).

•    You will implement two types of light sources: ambient and point. Although there will be only one ambient light, there may be multiple point light sources. The intensity of these lights will be given as (r, g, b) triplets.

Point lights will be defined by their intensity (power per unit solid angle). The irradiance due to such a light source is inversely proportional to the squared distance from the light source. To simulate this effect, you must compute the irradiance at a distance d from a point light as:

E(d) = I / d^2

where I is the original light intensity (an rgb triplet given in the XML file) and E(d) is the irradiance at distance d from the light source.
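Combining this falloff with the Blinn-Phong model mentioned above, the diffuse and specular contribution of a single point light can be sketched as below. This is only an illustration; the Vec3 helpers, the field names, and the function itself are assumed names and are not part of the provided template.

```cpp
#include <cmath>
#include <algorithm>

// Minimal vector helpers reused by the sketches in this document.
struct Vec3 { double x, y, z; };
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 a) { double len = std::sqrt(dot(a, a)); return (1.0 / len) * a; }

// Diffuse + specular contribution of one point light at a hit point.
// p: hit point, n: unit surface normal, eye: unit vector toward the camera,
// lightPos/lightIntensity: from the PointLight tag,
// kd/ks/phongExponent: from the Material tag.
Vec3 shadePointLight(Vec3 p, Vec3 n, Vec3 eye,
                     Vec3 lightPos, Vec3 lightIntensity,
                     Vec3 kd, Vec3 ks, double phongExponent) {
    Vec3 toLight = lightPos - p;
    double d2 = dot(toLight, toLight);        // squared distance to the light
    Vec3 l = normalize(toLight);

    // Inverse-square falloff: E(d) = I / d^2, applied per color channel.
    Vec3 E = (1.0 / d2) * lightIntensity;

    // Blinn-Phong: half vector between the light and the eye directions.
    Vec3 h = normalize(l + eye);
    double cosTheta = std::max(0.0, dot(n, l));     // diffuse cosine
    double cosAlpha = std::max(0.0, dot(n, h));     // specular cosine
    double spec = std::pow(cosAlpha, phongExponent);

    return Vec3{ (kd.x * cosTheta + ks.x * spec) * E.x,
                 (kd.y * cosTheta + ks.y * spec) * E.y,
                 (kd.z * cosTheta + ks.z * spec) * E.z };
}
```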
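For the overall structure (one XML scene given on the command line, one ppm image per camera), a minimal driver might look like the sketch below. The names parser::Scene, loadFromXml, write_ppm, and the camera field names are assumptions modeled on a typical template and must be checked against the headers actually provided to you.

```cpp
#include <vector>
#include "parser.h"  // assumed name of the provided XML parser header
#include "ppm.h"     // assumed name of the provided ppm saver header

int main(int argc, char* argv[]) {
    // The scene XML file is given as the first command-line argument.
    parser::Scene scene;              // assumed class name from the template
    scene.loadFromXml(argv[1]);       // assumed parser entry point

    // Render one image per camera defined in the scene file.
    for (const auto& camera : scene.cameras) {
        int width  = camera.image_width;    // assumed field names
        int height = camera.image_height;

        // One byte per channel, three channels per pixel.
        std::vector<unsigned char> image(width * height * 3);

        // ... fill `image` by tracing a ray through every pixel ...

        // The template's ppm writer is assumed to take a filename,
        // a pixel buffer, and the image dimensions.
        write_ppm(camera.image_name.c_str(), image.data(), width, height);
    }
    return 0;
}
```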

Scene File
The scene file will be formatted as an XML file. In this file, there may be different numbers of materials, vertices, triangles, spheres, lights, and cameras. Each of these is defined by a unique integer id. The ids for each type of element start from one and increase sequentially. Explanations of each XML tag are provided below:

•    BackgroundColor: Specifies the rgb value of the background. If a ray sent through a pixel does not hit any object, the pixel will be set to this color.

•    ShadowRayEpsilon: When a ray hits an object, you are going to send a shadow ray from the intersection point to each point light source to decide whether the hit point is in shadow or not. Due to floating-point precision errors, the shadow ray sometimes hits the same object even when it should not. Therefore, you must use this small ShadowRayEpsilon value, which is a floating-point number, to offset the intersection point slightly so that the shadow ray does not intersect the same object again. Note that the ShadowRayEpsilon value can also be used to avoid self-intersections while casting reflection rays from the intersection point. A sketch of this offset is included in the shading example after this list.

•    MaxRecursionDepth: Specifies how many bounces a ray makes off of mirror-like objects. It is applicable only when a material has a nonzero MirrorReflectance value. Primary rays are assumed to start with a bounce count of zero. A sketch of this recursion is given after this list.

•    Camera:

–    Position: Defines the coordinates of the camera.

–    Gaze: Defines the direction that the camera is looking at. You must assume that the gaze vector of the camera is always perpendicular to the image plane.

–    Up: Defines the up vector for the camera.

–    NearPlane: Defines the coordinates of the image plane with left, right, bottom, top parameters.

–    NearDistance: Defines the distance from the camera to the image plane.

–    ImageResolution: Defines the resolution of the image as width and height.

–    ImageName: Defines the name of the output file.

Cameras defined in this assignment will be right-handed. A sketch of ray generation from these parameters is provided after this list. The mapping of the Up and Gaze vectors to the camera terminology used in the course slides is given as:

Up = v

Gaze = −w

u = v × w

•    AmbientLight: Defined by an intensity rgb triplet. This is the amount of light received by each object even if the object is in shadow.

•    PointLight: Defined by a position (triplet as x, y, z coordinates) and intensity (rgb triplet).

•    Material: A material is defined by ambient, diffuse, specular, and mirror reflectance properties for each color channel. Values are floats between 0.0 and 1.0. PhongExponent defines the specularity exponent in Blinn-Phong shading. MirrorReflectance represents how mirror-like the material is. If this value is not zero, you must cast reflection rays and scale the resulting color value by the MirrorReflectance components. A sketch of the shading and reflection computations is given after this list.

•    VertexData: Each line contains a vertex whose x, y, and z coordinates are given as floating point values, respectively.

•    Triangle: A triangle is represented by Material and Indices attributes. The Material attribute is the material id. Indices are the integer vertex ids of the vertices that make up the triangle (note that vertices are 1-based, i.e., the index of the first vertex in the VertexData field is 1, not 0). Vertices are given in counter-clockwise order, which is important when you want to calculate the normals of the triangles. Counter-clockwise order means that if you curl your right hand following the order of the vertices, your thumb points in the direction of the surface normal. A sketch of the normal computation and ray-triangle intersection is given after this list.

•    Sphere: A sphere is represented by Material, Center, and Radius attributes. The Material attribute is the material id. Center is the vertex id of the point at the center of the sphere (again 1-based). The Radius attribute is the radius of the sphere. A sketch of ray-sphere intersection is given after this list.

•    Mesh: Each mesh is composed of several faces. A face is a triangle containing three vertices. When defining a mesh, each line in the Faces attribute defines one triangle: it contains three integer vertex ids given in counter-clockwise order (again 1-based). The Material attribute is the material id of the mesh. Each face can be intersected exactly like a Triangle.
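The Camera parameters above are enough to generate primary rays. The sketch below shows one common way to do it for a right-handed camera with u = v × w and Gaze = −w. It reuses the Vec3 helpers from the Blinn-Phong sketch in the Specifications section, and the struct layout is an assumption rather than the template's actual one.

```cpp
// Uses the Vec3 helpers (operators, dot, cross, normalize) from the
// Blinn-Phong sketch in the Specifications section.

struct Ray { Vec3 origin, direction; };

// Assumed camera fields, mirroring the XML tags described above.
struct CameraData {
    Vec3 position, gaze, up;
    double left, right, bottom, top;   // NearPlane
    double nearDistance;               // NearDistance
    int width, height;                 // ImageResolution
};

// Primary ray through the center of pixel (i, j): i = column, j = row (0 = top).
Ray generateRay(const CameraData& cam, int i, int j) {
    // Right-handed camera basis: v = up, w = -gaze, u = v x w.
    Vec3 w = normalize(-1.0 * cam.gaze);
    Vec3 v = normalize(cam.up);
    Vec3 u = cross(v, w);

    // Top-left corner of the image plane.
    Vec3 m = cam.position + (-cam.nearDistance) * w;   // image plane center
    Vec3 q = m + cam.left * u + cam.top * v;

    // Offset of the pixel center inside the image plane.
    double su = (i + 0.5) * (cam.right - cam.left) / cam.width;
    double sv = (j + 0.5) * (cam.top - cam.bottom) / cam.height;
    Vec3 s = q + su * u + (-sv) * v;

    return Ray{ cam.position, normalize(s - cam.position) };
}
```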
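For the Sphere element, intersection reduces to solving a quadratic equation in the ray parameter t. The sketch below reuses the Vec3 and Ray types from the sketches above and keeps only the nearest hit in front of the ray.

```cpp
#include <cmath>

// Returns the ray parameter t of the nearest hit, or a negative value on a miss.
// Uses the Vec3 and Ray types from the sketches above.
double intersectSphere(const Ray& ray, Vec3 center, double radius) {
    Vec3 oc = ray.origin - center;

    // Solve |o + t*d - c|^2 = r^2  =>  a*t^2 + b*t + c = 0.
    double a = dot(ray.direction, ray.direction);   // 1 if the direction is normalized
    double b = 2.0 * dot(ray.direction, oc);
    double c = dot(oc, oc) - radius * radius;

    double discriminant = b * b - 4.0 * a * c;
    if (discriminant < 0.0) return -1.0;            // no real roots: the ray misses

    double sqrtDisc = std::sqrt(discriminant);
    double t1 = (-b - sqrtDisc) / (2.0 * a);        // nearer root
    double t2 = (-b + sqrtDisc) / (2.0 * a);        // farther root

    if (t1 > 0.0) return t1;                        // sphere in front of the ray
    if (t2 > 0.0) return t2;                        // ray origin inside the sphere
    return -1.0;                                    // sphere behind the ray
}
```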
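For Triangle and Mesh faces, the counter-clockwise ordering gives the surface normal via a cross product, and the intersection can be computed with barycentric coordinates, here via Cramer's rule. The vertex lookup and the 1-based indexing are left out; the types are the assumed ones from the sketches above.

```cpp
// Surface normal of a CCW triangle (a, b, c): cross product of two edges.
Vec3 triangleNormal(Vec3 a, Vec3 b, Vec3 c) {
    return normalize(cross(b - a, c - a));
}

// Ray-triangle intersection using barycentric coordinates (Cramer's rule).
// Returns t of the hit, or a negative value on a miss.
double intersectTriangle(const Ray& ray, Vec3 a, Vec3 b, Vec3 c) {
    // Solve  o + t*d = a + beta*(b - a) + gamma*(c - a)  for (beta, gamma, t),
    // rewritten as  beta*(a - b) + gamma*(a - c) + t*d = a - o.
    Vec3 e1 = a - b;
    Vec3 e2 = a - c;
    Vec3 ao = a - ray.origin;
    Vec3 d  = ray.direction;

    // 3x3 determinant of a matrix with the given columns (scalar triple product).
    auto det3 = [](Vec3 c1, Vec3 c2, Vec3 c3) { return dot(c1, cross(c2, c3)); };

    double detA = det3(e1, e2, d);
    if (detA == 0.0) return -1.0;     // ray parallel to the triangle
                                      // (in practice, compare against a small epsilon)
    double beta  = det3(ao, e2, d) / detA;
    double gamma = det3(e1, ao, d) / detA;
    double t     = det3(e1, e2, ao) / detA;

    // Inside the triangle and in front of the ray?
    if (t > 0.0 && beta >= 0.0 && gamma >= 0.0 && beta + gamma <= 1.0) return t;
    return -1.0;
}
```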
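Finally, the ShadowRayEpsilon, MaxRecursionDepth, and Material entries come together in the per-hit shading routine. The sketch below shows one possible structure: the ambient term is always added, each point light contributes only if its shadow ray is unobstructed, and mirror materials spawn a reflection ray until the recursion depth is exhausted. The Scene, Material, PointLight, and HitRecord layouts and the closestHit/inShadow helpers are hypothetical placeholders, not the provided template's API; shadePointLight is the function from the earlier sketch.

```cpp
#include <vector>

// Hypothetical mirrors of the parsed scene data; the provided parser's
// actual field names may differ.
struct Material { Vec3 ambient, diffuse, specular, mirror; double phongExponent; };
struct PointLight { Vec3 position, intensity; };
struct Scene {
    Vec3 backgroundColor, ambientLight;
    double shadowRayEpsilon;
    int maxRecursionDepth;
    std::vector<Material> materials;
    std::vector<PointLight> pointLights;
};

// Hypothetical hit record and scene-query helpers; the real ones depend on
// how you organize your private members.
struct HitRecord { bool hit; double t; Vec3 point, normal; int materialId; };
HitRecord closestHit(const Ray& ray);                       // placeholder
bool inShadow(Vec3 from, Vec3 lightPos, double epsilon);    // placeholder

Vec3 rayColor(const Ray& ray, int depth, const Scene& scene) {
    HitRecord rec = closestHit(ray);
    if (!rec.hit) return scene.backgroundColor;     // BackgroundColor tag

    const Material& mat = scene.materials[rec.materialId - 1];   // ids are 1-based
    Vec3 eye = normalize(ray.origin - rec.point);

    // Ambient term: received even when the point is in shadow.
    Vec3 color = { mat.ambient.x * scene.ambientLight.x,
                   mat.ambient.y * scene.ambientLight.y,
                   mat.ambient.z * scene.ambientLight.z };

    // Diffuse + specular from every point light that is not blocked.
    for (const PointLight& light : scene.pointLights) {
        if (inShadow(rec.point, light.position, scene.shadowRayEpsilon)) continue;
        color = color + shadePointLight(rec.point, rec.normal, eye,
                                        light.position, light.intensity,
                                        mat.diffuse, mat.specular, mat.phongExponent);
    }

    // Mirror reflection: recurse while bounces remain and the material is reflective.
    if (depth < scene.maxRecursionDepth &&
        (mat.mirror.x > 0 || mat.mirror.y > 0 || mat.mirror.z > 0)) {
        Vec3 r = ray.direction - (2.0 * dot(ray.direction, rec.normal)) * rec.normal;
        // Offset the origin with ShadowRayEpsilon to avoid hitting the same surface.
        Ray reflected{ rec.point + scene.shadowRayEpsilon * rec.normal, normalize(r) };
        Vec3 bounce = rayColor(reflected, depth + 1, scene);
        color = color + Vec3{ mat.mirror.x * bounce.x,
                              mat.mirror.y * bounce.y,
                              mat.mirror.z * bounce.z };
    }
    return color;
}
```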

You can open the sample XML files given to you with a text editor to see and understand the scene structure.
