Sunday, February 19, 2012

Royal colpire(avvitamento) in CG viva

Here are some viva questions and their answers which might help you score more in the lab...

1) What are the applications of Computer Graphics?

Computer Graphics finds applications in the display of information, the design of architectural buildings and VLSI circuits, simulation and animation, and user interfaces.

2) What are the two types of Display Devices based on CRT Technology?

Raster CRT and Vector CRT. In a Raster CRT the electron beam traces the screen from left to right in row-wise order and, at the end of the last row, retraces back to the first row. Such devices require scan conversion. In a Vector CRT, or Calligraphic CRT, most often seen in oscilloscopes, the electron beam can be moved from any position to any other position; the beam can be turned off while moving to a new position. Raster CRTs can be further classified as interlaced and non-interlaced.

3) In a Color CRT, What role does the shadow mask play?

The shadow mask is a metal screen with small holes that ensures an electron beam excites only phosphors of the proper color.

4) What are the Basic Transformations in Computer Graphics?

Translation: moving objects from one point to another. Rotation: rotating objects about a fixed point, either inside or outside the object. Scaling: increasing or decreasing the size of objects. Shear: causing an object to change shape, as if a force were applied along one of its sides.

5) Give an introduction to the types of geometric primitives that can be drawn with OpenGL?

The type of primitive to be drawn is specified as a parameter to glBegin(). We can draw points (GL_POINTS), lines (GL_LINES), polylines (GL_LINE_STRIP, GL_LINE_LOOP), polygons (GL_POLYGON), triangles and quadrilaterals (GL_TRIANGLES, GL_QUADS), and strips and fans (GL_TRIANGLE_STRIP, GL_QUAD_STRIP, GL_TRIANGLE_FAN). The vertices of the primitive are specified using functions such as glVertex2f or glVertex3i, where the suffix f stands for float, d for double and i for integer.

6) How can you specify a viewer?

If you are writing a 2D program, a viewer can be specified using gluOrtho2D with parameters left, right, bottom, top, which specify a viewing rectangle within the projection plane, which is the z=0 (x-y) plane. Using this function causes all points outside the clipping rectangle to be invisible, or clipped.
If it is a 3D program we need to specify a view volume. The shape of the view volume depends on the type of projection. For an orthographic projection the view volume is a cuboid specified by glOrtho with parameters left, right, bottom, top, near, far. Any 3D object within this volume will be visible and any object outside it will not. The projection plane is again the z=0 plane.
For a perspective projection we can specify a view volume in the shape of a frustum of a pyramid using glFrustum with parameters left, right, bottom, top, near, far, or using gluPerspective with parameters field of view, aspect ratio, near, far.
We can specify a viewer position in such cases using gluLookAt with parameters eyex, eyey, eyez, atx, aty, atz, upx, upy, upz.
Remember that a 3D program can also draw 2D objects, but not vice versa.

7) What is hidden surface removal?

Hidden surface removal, or visible surface determination, is a technique used to achieve realism in 3D. When objects (polygons) are drawn on screen, the final image shows the polygons drawn last completely overlapping those drawn before. We should not draw polygons that are not visible; hence there is an order in which, if we draw the polygons, the scene will look real. This method of enforcing an order while drawing polygons is called the painter's algorithm. Other techniques, like the z-buffer algorithm, rely on removing the parts of the image that are farthest from the viewer and hence obscured by other polygons. While projecting points onto a 2D surface the transformation is not invertible, as all points lying on a projector map onto the same point in the image; so to perform hidden surface removal we retain depth information (distance along a projector) as long as possible in the pipeline.

8) What is projection normalization?

We use a technique called projection normalization, which converts all projections into orthogonal projections by first distorting the objects such that the orthogonal projection of the distorted objects is the same as the desired projection of the original objects.

9) What is the minimum refresh rate required for real-time video?

The screen needs to be refreshed to draw slightly different images in a video. The small changes between images become visible as jerkiness if the refresh rate is less than 24 frames per second. Hence real-time video refers to a refresh rate of at least 24 images per second, and each image consequently needs to be generated within 1/24th of a second, or about 41.67 milliseconds.

10) What are the stages of a graphics pipeline? How is pipelining useful?

The graphics pipeline contains four stages: vertex processing, clipping and primitive assembly, rasterization, and fragment processing. Pipelining is used whenever multiple sets of data need to be processed the same way. For example, a complex graphics scene might consist of millions of polygons which need to be processed. If the vertices of the polygons are sent into the pipeline one at a time, the four stages mentioned above can process them in parallel. Pipelining increases the throughput of processing appreciably, while the latency of processing each element increases slightly.

11) What are the differences between additive colors and subtractive colors?

Examples of additive color devices are CRT monitors, projectors and (positive) slide film. Examples of subtractive color devices are color printers, which use cyan, magenta and yellow. In additive color, primary colors like red, green and blue add together to give the perceived color: the primaries add light to an initially black display, yielding the desired color. In subtractive color we assume that white light hits the surface; a particular point will appear red if the surface absorbs all components of the incoming light except for wavelengths in the red part of the spectrum, which are reflected.

12) What is a color look-up table?

Suppose that the frame buffer has k bits per pixel. Each pixel value, or index, is an integer between 0 and 2^k - 1. Suppose that we can display colors with a precision of m bits, i.e. 2^m reds, 2^m greens and 2^m blues; hence we can display any of 2^(3m) colors, but the frame buffer can specify only 2^k of them. We handle this through a user-defined look-up table of size 2^k x 3m bits. The user program fills the 2^k entries (rows) of the table with the desired colors. Once the LUT is populated, we specify a color by its index in the LUT. For k = m = 8, a common configuration, we can choose 256 colors for any image out of about 16 million; these 256 colors are called the palette.

13) What is frame buffer? What value is stored in the frame buffer? What is color depth? What is double buffering?

The frame buffer, also called video RAM, is a memory buffer which stores the image that is currently being displayed on the screen.
The frame buffer stores the pixel values. The number of pixel values depends on the horizontal and vertical resolution of the display. Ex: 1024x768.
Color depth is the number of bits required to store a pixel value, usually mentioned as bits per pixel.
Double buffering is used to speed up animation. Animation requires us to draw objects with slight displacements, and these calculations need to be done in real time at the rate of 24 frames per second for smooth animation. Double buffering lets the image displayed on screen be refreshed with just a change in the base address of the VRAM (video RAM). Double buffering is used when objects on screen are moving. Usually, when double buffering is used, the frame buffer size is twice that required to store one image.

14) How are shadows generated in OpenGL?

A shadow, in order to be drawn, needs to be generated by projecting the rays coming from a light source. In viewing, projections are already used to generate images; the same transformations can be reused to generate shadows. The centre of projection (COP) is now the position of the light source. The shadow is assumed to be produced on a projection plane, say the ground (the y=0 plane). The shadow polygon so generated is drawn in the shadow color and passes through the graphics pipeline as would any other primitive.

15) How can a polygon have a 2D-texture on its surface?

At various stages we work with screen coordinates, object coordinates, texture coordinates (which locate a position within the texture) and parametric coordinates (used to describe curved surfaces). A 2D texture can be created by populating an array such as
GLubyte my_texels[512][512][3];
This array is then passed as a parameter to the function
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 512, 512, 0, GL_RGB, GL_UNSIGNED_BYTE, my_texels);
where the parameters mean target, level, internal format, width, height, border, format, type and texel array. Level and border give us fine control over how the texture is handled. We must enable texture mapping, as we do other options, using
glEnable(GL_TEXTURE_2D);
There are separate memories: physical memory, the frame buffer and texture memory. Texture coordinates are mapped to vertices by calling glTexCoord2f(s, t);
The following block of code assigns the texture to a quadrilateral:
glBegin(GL_QUADS);
glTexCoord2f(0.0,0.0);
glVertex3f(x1,y1,z1);
glTexCoord2f(1.0,0.0);
glVertex3f(x2,y2,z2);
glTexCoord2f(1.0,1.0);
glVertex3f(x3,y3,z3);
glTexCoord2f(0.0,1.0);
glVertex3f(x4,y4,z4);
glEnd();


16) Which transformations are called rigid body transformations?

Translation and rotation do not change the shape of the body undergoing the transformation; hence they are called rigid-body transformations.


17) What is composition of transformations?

When basic transformations like translation and rotation are applied successively, one after another, it is called composition of transformations. Ex: if the object is denoted by O and translation and rotation by T and R, then composing them means applying T and then applying R to the result, i.e. R(T(O)), which is the same as (RT)O. In other words, the transformation matrices for translation and rotation can be multiplied to get a single resultant matrix which, when multiplied with the object (its points), gives the transformed object (with respect to position and orientation). The order of transformations is important, as RT is not the same as TR.
(Recall rot_mat in the 4th program, which was the result of the multiplication T.R.T.)


18) When do you use GLUT_DEPTH in call to glutInitDisplayMode()?

GLUT_DEPTH is used in the call to glutInitDisplayMode() when the objects we draw have three coordinates, i.e. x, y, z; we would be calling glVertex3i or glVertex3f instead of glVertex2i or glVertex2f. We also have to call glEnable(GL_DEPTH_TEST); and clear the depth buffer in the display function by calling glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);


19) How big are the two matrices GL_PROJECTION and GL_MODELVIEW?

The two matrices are 4x4 in size with sixteen elements.


20) Why do you need to call glFlush()?

Objects that have been drawn do not appear on the screen until glFlush() is called, which forces all buffered OpenGL commands to be executed.


21) What is a logic operation?

While drawing objects on screen, the bits in the frame buffer are modified. If a line is being drawn in white, then bits representing white are set at the positions corresponding to points on the line. If these source bits are combined bitwise with the pixel values already in the frame buffer, it becomes a logic operation. Various logic operations like AND, OR, NOT and XOR can be performed on the bits in the frame buffer.


22) How do you set material properties of objects?

We can set material properties such as GL_AMBIENT, GL_DIFFUSE, GL_SPECULAR and GL_SHININESS using calls to glMaterialfv or glMaterialf.


23) How do you set properties of light sources?

Light sources, denoted by GL_LIGHT0 to GL_LIGHT7 (OpenGL guarantees at least eight), can have properties such as position and the intensities of red, green and blue for their ambient, diffuse and specular components. We can set these using calls to glLightfv or glLightf.


24) Why do we call glPushMatrix() and glPopMatrix()?

glPushMatrix() saves the current transformation matrix on a stack and glPopMatrix() restores it. Between the push and the pop we can apply transformations that affect only the objects drawn there, without affecting the code that follows the pop. (Attributes such as the current color or raster position are saved and restored similarly with glPushAttrib() and glPopAttrib().)


25) Why do we need to call glutSwapBuffers() when using double buffering?

When using double buffering there are two buffers, GL_FRONT and GL_BACK. We usually draw into one of them while the contents of the other are being displayed. After drawing, we make that buffer the displayed one and start drawing into the buffer that was previously displayed. This is accomplished with a call to glutSwapBuffers();


26) How do you set the size and position of a window?

We can set the size and position of a window by calling glutInitWindowSize(width, height) and glutInitWindowPosition(x, y) before the window is created.

(colpire (avvitamento) is Italian for bashing (screwing))