Rendering Primitives

CU_MDX_Rendering.zip (63.3 KiB)

CUnitFramework.zip (101.1 KiB)

Extract the C-Unit Framework inside the project (not solution) directory.

3D objects are all composed of vertices. Each vertex can hold certain information such as position, color, texture coordinates, normals, fog data, etc. Managed DirectX contains a series of structures in the CustomVertex namespace that describe several different vertex types. Some examples include vertices with a position and color, vertices with a position and normal, and vertices with a position, normal, and texture coordinates. Why doesn’t DirectX just have one vertex type with all of this data? Well, straight from MSDN: “By using only the needed vertex components, your application can conserve memory and minimize the processing bandwidth required to render models.” We begin by initializing the three vertices that will form our triangle.

/// <summary>Initialize application-specific resources and states here.</summary>
/// <returns>True on success, false on failure</returns>
private bool Initialize()
{
    m_verts = new GraphicsBuffer<PositionColored>( 3 );
    m_verts.Write( new PositionColored( -2.0f, -2.0f, 5.0f, Color.Red ) );
    m_verts.Write( new PositionColored( 0.0f, 2.0f, 5.0f, Color.Green ) );
    m_verts.Write( new PositionColored( 2.0f, -2.0f, 5.0f, Color.Blue ) );
    return true;
}
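As an aside, you can see the memory savings the MSDN quote refers to by comparing the sizes of a couple of CustomVertex structures. This is only an illustrative check, assuming the Managed DirectX 1.1 layouts (three floats for position, three for a normal, two for texture coordinates, and a 4-byte packed color):

// CustomVertex.PositionColored: position (3 floats) + packed color = 16 bytes per vertex.
// CustomVertex.PositionNormalTextured: position + normal + (u, v) = 32 bytes per vertex.
Console.WriteLine( System.Runtime.InteropServices.Marshal.SizeOf( typeof( CustomVertex.PositionColored ) ) );
Console.WriteLine( System.Runtime.InteropServices.Marshal.SizeOf( typeof( CustomVertex.PositionNormalTextured ) ) );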

When we define geometry without vertex buffers (which we’ll use later), we define the vertices in a GraphicsBuffer. The triangle that we will render has a position and color, so we create a GraphicsBuffer of type CustomVertex.PositionColored. Note that there is a format, TransformedColored, which contains the position in window coordinates, as opposed to PositionColored, which specifies the position in 3D space. Notice the coordinates of the triangle. If you draw these points out on paper, keeping in mind a left-handed coordinate system, you’ll notice the vertices are specified in clockwise order. This is known as the winding order of the polygon. The winding order is used to determine whether or not the polygon is facing the camera. If you imagine a second triangle on the back side of our triangle, facing away from you and using the same vertex order, its winding order would be counter-clockwise with respect to the camera, which would prevent that back triangle from being rendered. The process of eliminating back-facing polygons is known as backface culling. If our triangle were to rotate, it would disappear once its back side faces the camera, because back-facing polygons are never rendered. This reduces the processing load on the GPU.
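If you ever want to see both sides of a polygon, you can change this behavior through a render state. A minimal sketch, using the standard CullMode render state and Cull enumeration, which you could set alongside the other render states in OnResetDevice below:

// By default Direct3D culls polygons with a counter-clockwise winding (Cull.CounterClockwise).
// Cull.None disables backface culling entirely, so a rotating triangle stays visible
// even when its back side faces the camera (at the cost of extra rasterization work).
device.RenderState.CullMode = Cull.None;

// Restore the default behavior when you are done experimenting.
device.RenderState.CullMode = Cull.CounterClockwise;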

/// <summary>
/// This event will be fired immediately after the Direct3D device has been
/// reset, which will happen after a lost device scenario, a window resize, and a
/// fullscreen toggle. This is the best location to create Pool.Default resources
/// since these resources need to be reloaded whenever the device is reset. Resources
/// created here should be released in the OnLostDevice callback.
/// </summary>
/// <param name="device">The Direct3D device</param>
public override void OnResetDevice( Device device )
{
    // Set transforms
    device.Transform.World = Matrix.Identity;
    device.Transform.View = Matrix.Identity;
    Size displaySize = m_framework.DisplaySize;
    float aspect = (float)displaySize.Width / (float)displaySize.Height;
    device.Transform.Projection = Matrix.PerspectiveFieldOfViewLeftHanded( (float)Math.PI / 3.0f, aspect, 0.1f, 1000.0f );

    // Set render states
    device.RenderState.Lighting = false;
    device.RenderState.FillMode = m_framework.FillMode;
}

In order to render 3D geometry, we need to supply DirectX with some matrices that will transform an object defined in 3D space into screen coordinates. This transformation is carried out by the fixed-function pipeline. For a more in-depth explanation of the math, read MSDN’s page on transforms or Real-Time Rendering. The first matrix we specify is the world matrix. The world matrix transforms geometry from its own local space into world space; it controls where in 3D space our geometry will be placed. Next up is the view matrix. The view matrix represents a virtual camera. It tells DirectX where to place the camera, where to point the camera, and which way is up. By setting these two matrices to the identity matrix, we create the effect of a camera sitting at the origin pointing down the positive z-axis.
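If you want the geometry or the camera somewhere other than the origin, you replace the identity matrices with explicit transforms. The sketch below is only an illustration: Matrix.Translation is standard Managed DirectX, but the LookAtLeftHanded name is an assumption on my part, chosen to match the PerspectiveFieldOfViewLeftHanded call used above (in Managed DirectX 1.1 the equivalents are Matrix.Translation and Matrix.LookAtLH).

// Hypothetical example: shift the triangle two units to the right in world space...
device.Transform.World = Matrix.Translation( 2.0f, 0.0f, 0.0f );

// ...and pull the camera back to (0, 0, -5), aiming it at the origin with +Y as up.
// LookAtLeftHanded is an assumed MDX 2.0-style name; MDX 1.1 calls this Matrix.LookAtLH.
device.Transform.View = Matrix.LookAtLeftHanded(
    new Vector3( 0.0f, 0.0f, -5.0f ),   // camera position
    new Vector3( 0.0f, 0.0f, 0.0f ),    // point the camera looks at
    new Vector3( 0.0f, 1.0f, 0.0f ) );  // which way is up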

The projection matrix defines how the 3D geometry is projected into 2D space. To create the projection matrix, we need to calculate the aspect ratio. If you look at the coordinates of the triangle, you’ll notice that it fits perfectly in a 4×4 square. However, monitors and windows are usually not square, so if we copied the geometry straight to the screen, it would appear stretched or squashed because the image is scaled to fill the entire viewport. The aspect ratio fixes this problem by scaling the geometry before it is transformed into projection space, so everything appears as we would expect after projection. We can create the projection matrix by calling Matrix.PerspectiveFieldOfViewLeftHanded. The field of view parameter is usually around PI / 3, or 60 degrees.
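As a concrete example, a 1024×768 window gives an aspect ratio of 1024 / 768 ≈ 1.33, and a 60-degree field of view works out to 60 × PI / 180 = PI / 3 ≈ 1.047 radians, which is exactly what OnResetDevice computes above:

// Worked example with explicit numbers (normally you read the size from the framework).
float aspect = 1024.0f / 768.0f;                  // ~1.333
float fov = 60.0f * (float)Math.PI / 180.0f;      // ~1.047 radians, i.e. PI / 3
device.Transform.Projection = Matrix.PerspectiveFieldOfViewLeftHanded( fov, aspect, 0.1f, 1000.0f );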

I also specify a couple of render states through Device.RenderState. Render states control how geometry is rendered. In this example, I turn off lighting and set the FillMode to whatever the framework’s current fill mode is (solid, wireframe, or points).
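For example, rendering the triangle as an outline instead of a filled surface is just a matter of changing one render state (FillMode.Point, FillMode.WireFrame, and FillMode.Solid are the available values):

// Draw only the triangle's edges; set FillMode.Solid to go back to filled polygons.
device.RenderState.FillMode = FillMode.WireFrame;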

/// <summary>Renders the current frame.</summary>
/// <param name="device">The Direct3D device</param>
/// <param name="elapsedTime">Time elapsed since last frame</param>
public override void OnRenderFrame( Device device, float elapsedTime )
{
    device.Clear( ClearFlags.Target | ClearFlags.ZBuffer, Color.Black, 1.0f, 0 );
    device.BeginScene();

    // Set render states since the GUI and font change them when they render
    device.SetRenderState( RenderStates.ZEnable, true );
    device.SetRenderState( RenderStates.ZBufferWriteEnable, true );
    device.SetRenderState( RenderStates.AlphaBlendEnable, false );

    // Render triangle
    device.VertexFormat = PositionColored.Format;
    device.DrawUserPrimitives( PrimitiveType.TriangleList, 1, m_verts );

    // Render GUI
    m_gui.Render( device );

    // Only need to rebuild the text when the FPS updates
    if ( m_fps != m_framework.FPS )
    {
        m_fps = m_framework.FPS;
        BuildText();
    }
    m_bFont.Render( device );

    device.EndScene();
    device.Present();
}

To render the triangle, we first need to specify what vertex format our vertices are in. We do this by setting the VertexFormat property of the device. Then we can render the triangle by calling Device.DrawUserPrimitives. A user primitive is one defined by vertex data in application memory, in this case the m_verts GraphicsBuffer. In normal applications you will use vertex buffers instead of user primitives because they are much faster: vertex buffer data can be stored in video memory, whereas user primitive data has to be copied to the device every time it is drawn. However, user primitives make it easy to understand the fundamentals of rendering vertices.
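For comparison, here is a hedged sketch of what the vertex buffer route looks like using the Managed DirectX 1.1 types (the exact types differ slightly under the framework used in these tutorials, and a later tutorial covers vertex buffers properly). The key difference is that the data is written to the buffer once instead of being sent from system memory every frame:

// Create a device-managed buffer sized for three PositionColored vertices (MDX 1.1 style).
VertexBuffer vb = new VertexBuffer( typeof( CustomVertex.PositionColored ), 3, device,
    Usage.WriteOnly, CustomVertex.PositionColored.Format, Pool.Managed );

// Fill the buffer once, up front.
vb.SetData( new CustomVertex.PositionColored[]
{
    new CustomVertex.PositionColored( -2.0f, -2.0f, 5.0f, Color.Red.ToArgb() ),
    new CustomVertex.PositionColored( 0.0f, 2.0f, 5.0f, Color.Green.ToArgb() ),
    new CustomVertex.PositionColored( 2.0f, -2.0f, 5.0f, Color.Blue.ToArgb() )
}, 0, LockFlags.None );

// At render time, bind the buffer as the vertex source and draw from it.
device.SetStreamSource( 0, vb, 0 );
device.VertexFormat = CustomVertex.PositionColored.Format;
device.DrawPrimitives( PrimitiveType.TriangleList, 0, 1 );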

The PrimitiveType parameter defines what type of primitive our vertices form. A triangle list renders a new triangle for every 3 vertices in the rendering pipeline. Triangle strips and fans are often more efficient than using triangle lists because fewer vertices are duplicated. But lists are just fine for such a small application.
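To make the vertex savings concrete: a quad drawn as a triangle list needs six vertices (two full triangles), while a triangle strip needs only four, because every vertex after the first two forms a new triangle with the previous two (Direct3D flips the winding on every other strip triangle so culling stays consistent). A short sketch that mirrors the user-primitive call above:

// Four vertices, ordered top-left, top-right, bottom-left, bottom-right,
// describe two triangles when drawn as a strip.
GraphicsBuffer<PositionColored> quad = new GraphicsBuffer<PositionColored>( 4 );
quad.Write( new PositionColored( -2.0f, 2.0f, 5.0f, Color.Red ) );
quad.Write( new PositionColored( 2.0f, 2.0f, 5.0f, Color.Green ) );
quad.Write( new PositionColored( -2.0f, -2.0f, 5.0f, Color.Blue ) );
quad.Write( new PositionColored( 2.0f, -2.0f, 5.0f, Color.White ) );

// Two primitives (triangles) from four vertices.
device.DrawUserPrimitives( PrimitiveType.TriangleStrip, 2, quad );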

Notice that before I render the triangle, I set some additional render states. I do this because the C-Unit GUI and BitmapFont systems change some render states when they render, so I am just setting them back before rendering the triangle.

Continue on to the next tutorial in the fun-filled world of Managed DirectX.