Texture Compression, Filters, and Transformations

  CU_MDX_TextureTechniques.zip (352.5 KiB)


  CUnitFramework.zip (101.1 KiB)

Extract the C-Unit Framework inside the project (not solution) directory.

In this tutorial, we’ll learn about several techniques we can use when working with textures. Specifically, we will cover texture compression, filtering methods, and how to transform a texture on the surface of a polygon:

Texture Techniques

Compressing a texture reduces the amount of memory required to store it. This, of course, is a good thing since it allows us to either store more textures or store bigger, more detailed textures. MSDN has a section describing compressed textures so I suggest browsing through those pages. Also, I found a nice article about texture compression in general. And the Unreal Developer Network has a snippet on compressed textures in DirectX, although it refers to DirectX 8. Using compressed textures in Managed DirectX is a surprisingly simple process.

To use compressed textures, we just specify a compressed texture format when we create a new Texture. And…that’s it. Not all video cards support compressed texture formats though, so to test if the hardware supports compressed formats, call Manager.CheckDeviceFormat. I, however, was a naughty boy and didn’t perform this test since apparently, a GeForce 2 can handle texture compression. So if the code doesn’t run on your machine, for goodness sake, buy a new video card already! Or you can try putting in the above check. Moving on, there are five different texture compression Formats: Format.Dxt1 through Format.Dxt5. The differences between the five formats lie in how they handle alpha values.
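Before going through the individual formats, here is a rough sketch (not from the tutorial's source) of what that Manager.CheckDeviceFormat test might look like. The adapter ordinal and display format below are assumptions you would adjust for your own setup:

// Rough sketch: ask the Manager whether the default adapter's hardware device
// can create Dxt1 textures under an X8R8G8B8 display mode (both values are
// assumptions for this example).
bool supportsDxt1 = D3D.Manager.CheckDeviceFormat(
    0,                           // adapter ordinal
    D3D.DeviceType.Hardware,     // device type
    D3D.Format.X8R8G8B8,         // current display format
    D3D.Usage.None,              // usage
    D3D.ResourceType.Textures,   // resource type we want to create
    D3D.Format.Dxt1 );           // format to test

if ( !supportsDxt1 )
{
    // Fall back to an uncompressed format such as Format.A8R8G8B8.
}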

MSDN has some pages describing the compressed texture formats in detail, but you don’t need to know all this to use compressed textures. Dxt1 is a 4-bit color format with one bit used as an alpha value. This means that a pixel is either fully opaque or fully transparent. This is the most compressed format, and since I won’t be needing any alpha values, I use Format.Dxt1 for this tutorial. Dxt2 and Dxt3 are 4-bit color formats with four bits used for an alpha channel. These formats use what is called explicit alpha encoding, which means that each texel has its own individual 4-bit alpha value. We would want to use these formats if we wanted to have 16 alpha levels as opposed to the two levels offered by Dxt1. The difference between Dxt2 and Dxt3 is that in the Dxt2 format, the color values have been premultiplied by their corresponding alpha values. A premultiplied alpha channel is an optimization that prevents a lot of multiplications from occurring:

When a pixel is rendered with an alpha value, it needs to be blended with whatever is already on the backbuffer. This calculation is as follows:

FinalColor = (ImagePixelColor * ImagePixelAlpha) + (BackgroundPixelColor * (1 - ImagePixelAlpha))

Since ImagePixelColor and ImagePixelAlpha are in the same texture file and will never change, we can premultiply them so we don’t have to perform this multiplication every time we render the texture. This leaves us with the following calculation:

FinalColor = ImagePixelPreMultiplied + (BackgroundPixelColor * (1 - ImagePixelAlpha))

As you can see, we eliminated a multiplication-per-texel by using a premultiplied alpha channel. If we use a 512×512 texture, this saves us up to 262,144 multiplications every time we render the texture!
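Just to illustrate the idea, premultiplying a single color might look like the helper below. This is not part of the tutorial's code; a Dxt2/Dxt4 encoder does the equivalent work for every texel when the compressed texture is built.

// Illustrative helper (not part of the tutorial's code): premultiply the RGB
// channels of a System.Drawing.Color by its alpha.
static Color PremultiplyAlpha( Color color )
{
    float alpha = color.A / 255.0f;
    return Color.FromArgb(
        color.A,
        (int)( color.R * alpha ),
        (int)( color.G * alpha ),
        (int)( color.B * alpha ) );
}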

Dxt4 and Dxt5 are 4-bit color formats with three bits used for each texel’s alpha channel. However, these 3-bit alpha values are interpolation indices between two 8-bit alpha values that are assigned to every 4×4 block of texels. Try drawing that out on paper if it’s a bit confusing. These formats take up the same amount of space as Dxt2 and Dxt3, but because the alpha channel is interpolated, they work best when the alpha values change gradually across the texture; Dxt2 and Dxt3 are the better choice when the alpha channel has sharp transitions. The difference between Dxt4 and Dxt5 is that Dxt4 uses a premultiplied alpha channel as explained above.

// Create the texture.
if ( materials[i].TextureFileName != null && materials[i].TextureFileName.Length > 0 )
{
    string texture = System.IO.Path.GetFileName( materials[i].TextureFileName );
    texture = Utility.GetMediaFile( texture );
    D3D.ImageInformation info = D3D.Texture.GetImageInformationFromFile( texture );

    // Compressed overload: note the D3D.Format.Dxt1 argument.
    m_textures[i] = new D3D.Texture( device, texture, info.Width, info.Height, 0, D3D.Usage.None, D3D.Format.Dxt1, D3D.Pool.Managed, D3D.Filter.Linear, D3D.Filter.Linear, 0, false, null );

    // Uncompressed overload, for comparison:
    //m_textures[i] = new D3D.Texture( device, texture );
}

Since I’ll be using some .x meshes, I put in a quick adjustment to my Mesh class. To use compressed textures, simply use a different Texture overload. Notice I’m using the Format.Dxt1 compressed format as I mentioned earlier. You can experiment and see what the performance gain is by running the program with and without compressed textures, switching between the two overloads displayed above. By using compressed textures, I was seeing a 15-20% performance boost. With texture compression behind us, let’s move on to texture filtering methods.

MSDN has a nice section on texture filtering, which is a good place to start before reading on. When DirectX renders a texture to the screen, the number of pixels it spans may not match the number of texels that make up the texture. The texture may be zoomed in or out or displayed at an angle. In these cases, DirectX has to decide what color to select from the texture to display to the screen. This process is called texture filtering.

DirectX provides three different methods of texture filtering: point filtering, linear filtering, and anisotropic filtering. When DirectX samples a texture, it computes the texel address of the color that it wants to grab from the texture. This address may not map directly to a single texel; it could end up anywhere within a 2×2 block of texels. So which color is DirectX supposed to sample? Point filtering simply takes the color of the texel with the closest integer address. If we wanted the texel color that’s supposed to be at (37.2, 20.4), point filtering would grab the color of the texel at (37, 20). So really, point filtering isn’t filtering at all. It’s simply sampling the texture, which is why MSDN calls it Nearest-Point Sampling. Nearest-Point Sampling may be fast, but it looks horrible. A more accurate approach would be to take a weighted average of the four texel colors in the 2×2 block of texels. This is what linear filtering accomplishes.

Linear filtering computes the final color by taking a weighted average of the four texels closest to the sampling point. As a result, textures appear smoother and sometimes a little blurry. You may have seen a video option in games called bilinear filtering. Bilinear filtering is the form of linear filtering that DirectX uses. Since bilinear filtering looks a lot better than point filtering and since it is implemented in hardware by modern graphics cards, you’ll almost always want to use linear filtering instead of point filtering.
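Conceptually, the weighted average works something like the sketch below. This is purely illustrative; the graphics card does this for you, the float[,] texture here is an assumption for the example, and edge clamping is omitted for brevity.

// Conceptual sketch of linear (bilinear) filtering on a single color channel.
static float SampleBilinear( float[,] texels, float u, float v )
{
    int x = (int)Math.Floor( u );
    int y = (int)Math.Floor( v );
    float fx = u - x;   // horizontal weight within the 2x2 block
    float fy = v - y;   // vertical weight within the 2x2 block

    // Weighted average of the 2x2 block surrounding the sample point.
    float top    = texels[x, y]     * ( 1.0f - fx ) + texels[x + 1, y]     * fx;
    float bottom = texels[x, y + 1] * ( 1.0f - fx ) + texels[x + 1, y + 1] * fx;
    return top * ( 1.0f - fy ) + bottom * fy;
}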

A problem with linear filtering becomes apparent when a texture is viewed from an angle. When viewed at an angle, a single monitor pixel may map to several different texels. Linear filtering, however, only computes the final color using a 2×2 block of texels. This lack of texel information can lead to ugly results for textures viewed at an angle. To calculate the final color using more texels, we use anisotropic filtering.

Anisotropic filtering maps a screen pixel into texture space in order to find all the texels that need to be considered when calculating the final pixel color. Anisotropic filtering uses a value called the level, or degree, of anisotropy to determine the quality of the filtering. A value of 1 means no anisotropic filtering at all, while the maximum value, which differs from video card to video card, gives the highest quality filtering. This maximum value is found in the Capabilities.MaxAnisotropy property.
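As a small sketch (not from the tutorial's source), we could clamp the level we ask for to that hardware maximum before using it; m_anisoLevel is the same field used by the filter-selection code shown later:

// Sketch: clamp the requested anisotropy level to what the hardware reports.
int maxAnisotropy = device.Capabilities.MaxAnisotropy;
m_anisoLevel = Math.Min( m_anisoLevel, maxAnisotropy );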

The above filters can be used when a texture is either magnified or minified. When we move closer to a texture, the displayed image may be larger than the actual texture stored in the file, so we need a texture filter to calculate a texture color. When we move further away from a texture, the displayed image may be smaller than the stored texture. Either way, the filtering process determines what color to display for a given pixel. Magnification and minification each have their own filter setting, which you will see in code later on.

A third texture filtering process available is mipmap filtering. Mipmaps are different sized copies of the same texture:

Mipmaps

Mipmaps are used to render a texture at various distances from the camera. Each successive mipmap is half the size of the previous mipmap. So if we start with a 256×256 image, our mipmaps will be 128×128, 64×64, 32×32, 16×16, all the way down to 1×1. When we load in a texture with TextureLoader.FromFile, there is a parameter called mipLevels, which specifies how many mipmaps we want TextureLoader to generate for us. If we specify 0, mipmaps are generated all the way down to the 1×1 size. Mipmaps are useful because it is more efficient for DirectX to display a smaller version of a texture than it is to display a larger version scaled down. DirectX determines when it needs to change mipmaps for us automatically so we don’t even have to worry about it. We can, however, choose the manner in which DirectX transitions between mipmap levels. This is where mipmap filtering comes in.
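Before looking at the filter itself, here is a rough example of that mipLevels parameter. The file name and most argument values below are placeholders, and the overload shown is the extended one that also lets us compress while loading:

// Rough example: 0 mip levels asks D3DX to generate the full chain down to 1x1.
Texture tex = TextureLoader.FromFile(
    device,
    "water.jpg",          // hypothetical texture file
    0, 0,                 // width/height of 0: take the size from the file
    0,                    // mipLevels: 0 = generate all levels down to 1x1
    Usage.None,
    Format.Dxt1,          // compress while loading, as in the mesh code above
    Pool.Managed,
    Filter.Linear,        // filter used when resizing the image
    Filter.Linear,        // filter used when generating each mip level
    0 );                  // color key (0 = disabled)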

Mipmap filtering can use either point filtering or linear filtering. When mip point filtering is used, DirectX samples from the nearest mipmap. While this is faster than linear mip filtering, it can produce noticeable popping artifacts at the points where a texture switches mipmap levels. In comparison, linear mip filtering takes a weighted average of samples from the two surrounding mipmaps. This creates a smooth transition between mipmap levels:

Mipmap Point Filter / Mipmap Linear Filter

When linear filtering is used for both the magnification and minification filters as well as the mipmap filter, this is called trilinear filtering, which you may have already heard of. Anisotropic filtering gives better results than linear filtering, but it is only available for the minification and magnification filters. Below are some sample screenshots I took using various filtering methods.

Point filters, no mipmapping.
Point filters.
Linear filters.
Anisotropic filters.

As you can see, the quality greatly improves as we move from point filtering to linear filtering to anisotropic filtering.

The effects of texture filtering in the downloaded source code are faint, but if you look closely, there is a subtle shift in texture quality.

switch ( m_filter )
{
    case TextureFilter.Anisotropic:
        if ( device.Capabilities.TextureFilterCapabilities.SupportsMagnifyAnisotropic )
        {
            device.SamplerState[0].MagFilter = TextureFilter.Anisotropic;
            device.SamplerState[0].MaxAnisotropy = m_anisoLevel;
        }
        if ( device.Capabilities.TextureFilterCapabilities.SupportsMinifyAnisotropic )
        {
            device.SamplerState[0].MinFilter = TextureFilter.Anisotropic;
            device.SamplerState[0].MaxAnisotropy = m_anisoLevel;
        }
        // Mip filter can't use Anisotropic so we'll set it to Linear
        device.SamplerState[0].MipFilter = TextureFilter.Linear;
        break;

    case TextureFilter.Linear:
        device.SamplerState[0].MagFilter = TextureFilter.Linear;
        device.SamplerState[0].MinFilter = TextureFilter.Linear;
        device.SamplerState[0].MipFilter = TextureFilter.Linear;
        break;

    case TextureFilter.Point:
        device.SamplerState[0].MagFilter = TextureFilter.Point;
        device.SamplerState[0].MinFilter = TextureFilter.Point;
        device.SamplerState[0].MipFilter = TextureFilter.Point;
        break;

    case TextureFilter.None:
        // Mag and Min filters use TextureFilter.Point as "no filter"
        device.SamplerState[0].MagFilter = TextureFilter.Point;
        device.SamplerState[0].MinFilter = TextureFilter.Point;
        device.SamplerState[0].MipFilter = TextureFilter.None;
        break;
}

The above code sample shows how to activate the various filters. To set a filter, simply assign a TextureFilter enumeration value to the desired filter property found in the Device.SamplerState property. Not all video cards support all the filtering methods, so to see if the hardware supports a specific filter, check the TextureFilterCapabilities value found in the Capabilities property. With texture filters now covered, we will move on to the final topic of this tutorial: texture coordinate transformations.

Texture coordinate transformation is the process of transforming a texture while it is applied to a polygon. This can be used to animate textures to create effects such as flowing water and lava, or moving clouds in the sky. To transform a texture on a polygon, we specify a texture transform matrix much like the world transform matrix that we use to transform 3D geometry. As a result, textures can be translated, rotated, and scaled by manipulating the texture transform matrix.

/// <summary>Updates a frame prior to rendering.</summary>
/// <param name="device">The Direct3D device</param>
/// <param name="elapsedTime">Time elapsed since last frame</param>
public override void OnUpdateFrame( Device device, float elapsedTime )
{
    // Scroll the water texture along its U axis.
    m_waterTextureMatrix.M31 += 0.5f * elapsedTime;

    // Wrap the translation so it stays in the [0, 1] range.
    if ( m_waterTextureMatrix.M31 > 1.0f )
    {
        m_waterTextureMatrix.M31 -= 1.0f;
    }
}

When we translate 3D objects, the translation values are found in the fourth row of the world transformation matrix. However, since textures are specified in a 2D coordinate space, the translation values in the texture transform matrix are located in the third row. Note, however, that we can also work with 1D, 3D, and even 4D texture coordinates, in which case the translation values would move accordingly.
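To make the row difference concrete, here is a small illustrative comparison (not from the tutorial's source; the offset values are arbitrary):

// Where translation lives in a world matrix versus a 2D texture transform matrix.
Matrix world = Matrix.Identity;
world.M41 = 5.0f;               // move 5 units along X: fourth row for 3D geometry

Matrix textureTransform = Matrix.Identity;
textureTransform.M31 = 0.25f;   // shift 0.25 along U: third row for 2D texture coords
textureTransform.M32 = 0.10f;   // shift 0.10 along V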

The OnUpdateFrame code shown earlier moves the texture along its U axis every frame. This will create the effect of water flowing down a river.

/// <summary>Renders the current frame.</summary>
/// <param name="device">The Direct3D device</param>
/// <param name="elapsedTime">Time elapsed since last frame</param>
public override void OnRenderFrame( Device device, float elapsedTime )
{
    device.Clear( ClearFlags.Target | ClearFlags.ZBuffer, Color.Black, 1.0f, 0 );
    device.BeginScene();

    // Clip…

    m_ground.Render( device );

    // Enable 2D texture transforms, set the matrix, render the water, then disable.
    device.SetTextureState( 0, TextureStates.TextureTransform, (int)TextureTransform.Count2 );
    device.Transform.Texture0 = m_waterTextureMatrix;
    m_water.Render( device );
    device.SetTextureState( 0, TextureStates.TextureTransform, (int)TextureTransform.Disable );

    // Clip…

    device.EndScene();
    device.Present();
}

To take advantage of texture coordinate transformations, we first need to enable them by calling Device.SetTextureState. Since we are working with 2D texture coordinates, we use the TextureTransform.Count2 enumeration value. Once texture transforms are activated, we set the texture transform matrix through the Device.Transform property. After the geometry is rendered, we can disable texture transformations by calling Device.SetTextureState with the TextureTransform.Disable flag.

That wraps up all I want to talk about in this tutorial. Until next time, laters.
