DirectX 10/11 – Basic Shader Reflection – Automatic Input Layout Creation

This is going to be a very brief post on a pretty simple but powerful tool for any hobby programmers tinkering with Direct3D. One of the most annoying things, at least for me, was always having to manually create input layouts for each different vertex shader I used in my hobby projects. I have now restarted work on my hobby game engine and have upgraded the renderer to DX11, and in doing so ripped out the DirectX effects framework. I'm actually planning on writing a post on moving from DX10 to DX11, but that's for another time. Right now, let's get back to input layouts. If you can remember, the input layout basically describes the memory layout of the input to the vertex shader program (i.e. the input HLSL struct). In all my previous tutorials, we had to describe this input layout by hand and then create it using the compiled vertex shader for validation. We used to do this as follows:

const D3D10_INPUT_ELEMENT_DESC vertexInputLayout[] =
{
	{ "ANCHOR", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
	{ "DIMENSIONS", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 8, D3D10_INPUT_PER_VERTEX_DATA, 0 },
	{ "OPACITY", 0, DXGI_FORMAT_R32_FLOAT, 0, 16, D3D10_INPUT_PER_VERTEX_DATA, 0 }
};

const int vertexInputLayoutNumElements = sizeof(vertexInputLayout)/sizeof(vertexInputLayout[0]);

D3D10_PASS_DESC PassDesc;

pTechnique->GetPassByIndex( 0 )->GetDesc( &PassDesc );
if ( FAILED( pD3DDevice->CreateInputLayout( vertexInputLayout,
											vertexInputLayoutNumElements,
											PassDesc.pIAInputSignature,
											PassDesc.IAInputSignatureSize,
											&pVertexLayout ) ) )
{
	return fatalError("Could not create Input Layout!");
}

The above sample is still using the effects framework, but that's irrelevant. There is another problem with this approach with regards to flexibility: if you wish to swap shaders on the fly, either they all need to use the same vertex shader input layout, or you need to somehow create all your input layouts in advance and then link them with the shaders you wish to load. Well, considering that we use the compiled shader to validate the input layout, the compiled shader must contain all the information necessary for us to create the input layout ourselves. Basically, we're just going to reverse engineer the validation step and create the input layout based on the shader.
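As a side note, the AlignedByteOffset values in the hand-written layout above (0, 8, 16) are just running sums of the sizes of the preceding elements (DXGI_FORMAT_R32G32_FLOAT is 8 bytes, DXGI_FORMAT_R32_FLOAT is 4 bytes). A minimal sketch of that bookkeeping, which is exactly what D3D11_APPEND_ALIGNED_ELEMENT automates for us later on (`ComputeAlignedOffsets` is a hypothetical helper, not a D3D function):

```cpp
#include <cstdint>
#include <vector>

// Given the byte size of each vertex element in declaration order,
// return the offset at which each element starts: each element begins
// exactly where the previous one ended (tightly packed layout).
std::vector<uint32_t> ComputeAlignedOffsets(const std::vector<uint32_t>& elementSizes)
{
    std::vector<uint32_t> offsets;
    uint32_t running = 0;
    for (uint32_t size : elementSizes)
    {
        offsets.push_back(running);
        running += size;
    }
    return offsets;
}
```

For the layout above, sizes {8, 8, 4} (ANCHOR, DIMENSIONS, OPACITY) give offsets {0, 8, 16}, matching the hand-written values.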

Now, if we don't use the effects framework, we have to compile our shaders manually. I'm not going to go into much detail regarding this, as the info is readily available in the SDK tutorials. Once your shader is compiled, you have the compiled shader blob, which we can then use to create the final vertex shader program as shown below:

if( FAILED( D3DX11CompileFromFile( pFilename, NULL, NULL, pFunctionName, "vs_4_0", shaderFlags, NULL, NULL, &pCompiledShaderBlob, &pErrorBlob, NULL ) ) )
{
	if ( pErrorBlob != NULL )
	{
		PRINT_ERROR_STRING( pErrorBlob->GetBufferPointer() );
		pErrorBlob->Release();
	}
	return false;
}

HRESULT hr = pD3DDevice->CreateVertexShader( pCompiledShaderBlob->GetBufferPointer(), pCompiledShaderBlob->GetBufferSize(), NULL, &shader.pVertexShader);

We can very easily create our input layout using the following function:

HRESULT CreateInputLayoutDescFromVertexShaderSignature( ID3DBlob* pShaderBlob, ID3D11Device* pD3DDevice, ID3D11InputLayout** pInputLayout )
{
	// Reflect shader info
	ID3D11ShaderReflection* pVertexShaderReflection = NULL;
	HRESULT hrReflect = D3DReflect( pShaderBlob->GetBufferPointer(), pShaderBlob->GetBufferSize(), IID_ID3D11ShaderReflection, (void**) &pVertexShaderReflection );
	if ( FAILED( hrReflect ) )
	{
		return hrReflect;
	}

	// Get shader info
	D3D11_SHADER_DESC shaderDesc;
	pVertexShaderReflection->GetDesc( &shaderDesc );

	// Read input layout description from shader info
	std::vector<D3D11_INPUT_ELEMENT_DESC> inputLayoutDesc;
	for ( UINT i = 0; i < shaderDesc.InputParameters; i++ )
	{
		D3D11_SIGNATURE_PARAMETER_DESC paramDesc;
		pVertexShaderReflection->GetInputParameterDesc(i, &paramDesc );

		// fill out input element desc
		D3D11_INPUT_ELEMENT_DESC elementDesc;
		elementDesc.SemanticName = paramDesc.SemanticName;
		elementDesc.SemanticIndex = paramDesc.SemanticIndex;
		elementDesc.Format = DXGI_FORMAT_UNKNOWN; // set from the component type and mask below
		elementDesc.InputSlot = 0;
		elementDesc.AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
		elementDesc.InputSlotClass = D3D11_INPUT_PER_VERTEX_DATA;
		elementDesc.InstanceDataStepRate = 0;

		// determine DXGI format
		if ( paramDesc.Mask == 1 )
		{
			if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32 ) elementDesc.Format = DXGI_FORMAT_R32_UINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32 ) elementDesc.Format = DXGI_FORMAT_R32_SINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32 ) elementDesc.Format = DXGI_FORMAT_R32_FLOAT;
		}
		else if ( paramDesc.Mask <= 3 )
		{
			if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32_UINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32_SINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32 ) elementDesc.Format = DXGI_FORMAT_R32G32_FLOAT;
		}
		else if ( paramDesc.Mask <= 7 )
		{
			if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32_UINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32_SINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32_FLOAT;
		}
		else if ( paramDesc.Mask <= 15 )
		{
			if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_UINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32A32_UINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_SINT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32A32_SINT;
			else if ( paramDesc.ComponentType == D3D_REGISTER_COMPONENT_FLOAT32 ) elementDesc.Format = DXGI_FORMAT_R32G32B32A32_FLOAT;
		}

		//save element desc
		inputLayoutDesc.push_back(elementDesc);
	}		

	// Try to create Input Layout
	HRESULT hr = pD3DDevice->CreateInputLayout( &inputLayoutDesc[0], (UINT) inputLayoutDesc.size(), pShaderBlob->GetBufferPointer(), pShaderBlob->GetBufferSize(), pInputLayout );

	//Free allocation shader reflection memory
	pVertexShaderReflection->Release();
	return hr;
}

What this function does is peek inside the compiled shader using shader reflection. A shader reflection is simply an interface for reading all the shader details at runtime. We first reflect the shader information using the D3DReflect function and then get the description of the shader via the reflection interface. From this description we can see how many input elements the vertex shader takes, and for each one we can get its description. Using this data we can fill out our input layout description structure. The above function is very simple and not robust enough to handle any shader thrown at it. I quickly slapped it together just to get things up and running, with the intention of extending it as needed in the future. I just figured it's something that, at least to my knowledge, isn't readily known or mentioned, and pointing it out might be useful for any hobbyist programmers 😛
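The Mask comparisons in the function work because Mask is a bitfield with one bit per used component (x = 1, y = 2, z = 4, w = 8), so a float2 input yields a mask of 3, a float3 yields 7, and a float4 yields 15. A standalone sketch of that mapping (`CountComponents` is a hypothetical helper for illustration, not part of the D3D API):

```cpp
#include <cstdint>

// D3D11_SIGNATURE_PARAMETER_DESC::Mask is a bitfield:
// bit 0 = x, bit 1 = y, bit 2 = z, bit 3 = w.
// Counting the set bits gives the number of components in the
// input element, which is what selects R32 vs R32G32 vs ... formats.
int CountComponents(uint32_t mask)
{
    int count = 0;
    for (uint32_t m = mask; m != 0; m >>= 1)
        count += (m & 1);
    return count;
}
```

So a mask of 7 (a float3 such as a position or normal) maps to the three-component R32G32B32 family of formats in the function above.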

Oh, by the way, before I forget: to actually use the shader reflection functions you need to include the D3DCompiler.h header and link against the D3DCompiler.lib static library.

DirectX10 Tutorial 10: Shadow Mapping Part 2

This is the second part of my shadow mapping tutorial (find part 1 here). I am going to discuss some of the issues with the shadow mapping technique as well as some basic solutions. I'm also going to attach a demo application in which you can play around with some of the shadow mapping variables and techniques. I do have to mention that the demo was a very quickly programmed prototype. It serves its purpose as a demo application, but I don't advise cutting and pasting the code into your own projects, as a lot of the techniques used aren't exactly optimal. Not to mention that I did take a few shortcuts here and there. Continue reading “DirectX10 Tutorial 10: Shadow Mapping Part 2”

DirectX10 Tutorial 10: Shadow Mapping Part 1

I've had some downtime lately, and seeing as I wrote a basic shadow mapping demo, I figured I'd write a short tutorial on the theory and implementation of shadow mapping. Shadow mapping is one of those topics that tends to get explained in an overly complicated manner when in fact the concept is rather simple. It is expected that you understand the basics of lighting before attempting this tutorial; if you want to learn more about some basic lighting models, please read my lighting tutorial. The figure below shows a sample scene with a single light illuminating a cube.

How shadows are formed

Continue reading “DirectX10 Tutorial 10: Shadow Mapping Part 1”

Debugging HLSL

A lot of guys have asked me for advice on developing and debugging shader programs. Well, it's a tricky topic to deal with. To be able to fully debug shader programs you will need either a shader IDE like FX Composer or a GPU debugging tool like NVIDIA Nsight. These are both complex tools and beyond the scope of a quick tutorial, but what I can do is provide a quick guide to help you write shaders directly within Visual Studio. You will not be able to perform any sort of in-depth debugging, but it will help you deal with silly syntax errors. The first thing you need is NShader. NShader is a shader syntax highlighting plugin for Visual Studio that helps with clarity when editing and writing shader programs.

The second thing is to create a custom build step within Visual Studio for your shader programs. This custom build step will use the MS shader compiler to compile your shader programs and notify you of any errors, as well as tell you on which lines the errors can be found. To do this, we first select our shader file, right-click, and select Properties (see figure 1 below).

Figure 1: Shader Code File Properties

Doing so will bring up the properties window. The first step is to ensure that the configuration drop-down is set to “All Configurations”. Then select Item Type and choose “Custom Build Tool” from the drop-down (see figure 2).

Figure 2: Enable Custom Build Tool Step

Click Apply; this will then show the Custom Build Tool tab on the left-hand side. Select the tab and you will be presented with the following dialog window:

Figure 3: Set Custom Build Tool Parameters

Under the general heading, set the command line value to the following:

"$(DXSDK_DIR)Utilities\bin\x86\"fxc.exe  /T fx_4_0 /Fo "%(Filename).fxo" "%(FullPath)"

This means that every time the shader file is modified and the solution is compiled, the shader file will be compiled using the Microsoft FX compiler (FXC). The /T flag specifies the type of shader file being compiled (i.e. the HLSL version), the /Fo flag specifies the compiled output file, and the %(FullPath) macro refers to the full path of the current shader file.

Also set the Custom Build Tool outputs to %(Filename).fxo, the same output file specified in the FXC command line. Click OK, and you are done.

The results of this process are shown below: all HLSL errors will pop up in the Visual Studio error list, and double-clicking on an error will take you to the line of code that caused it.

Figure 4: The results of the Custom Build Step

I’ve wasted a ton of time attempting to find silly syntax errors when developing shader programs, and this little custom build step has been a great help. I hope it helps you in some way as well…

DirectX10 Tutorial 9: The Geometry Shader

This tutorial is going to cover the basics of using the geometry shader stage present in DX10+. The geometry stage is extremely useful for rendering sprites, billboards and particle systems. This is the first part of a three-part series which will cover geometry shaders, billboards and particle systems.

The Geometry Shader

The geometry shader stage was introduced in DX10, and people initially assumed that it would be useful for tessellation purposes (which is true), but it's even more useful for particle systems and sprite rendering. The geometry stage sits between the vertex and pixel shader stages, and its primary use is creating new primitives from existing ones.

Just to recap: vertices are sent to the vertex shader within a vertex buffer that is stored on the GPU. A draw call issued to the API sends a vertex buffer down the pipeline. Each vertex first enters the vertex shader, where it is transformed and its vertex data is modified as necessary. Once vertices have been processed and output by the vertex shader, they get combined into primitives during the primitive setup stage of the API. The type of primitive created from the vertices sent through the vertex buffer depends on the primitive topology set (points, lines or triangles). Normally, once a primitive is constructed, it moves on to the screen mapping and fragment generation (converting triangles to pixels) stages before reaching the pixel shader stage and finally being drawn to the screen. Continue reading “DirectX10 Tutorial 9: The Geometry Shader”

DirectX10 Tutorial 8: Lighting Theory and HLSL

This tutorial will deal with basic scene lighting. It will cover the basic Phong and Blinn-Phong reflection models and the per-vertex (Gouraud) and per-pixel (Phong) shading models. The Blinn-Phong reflection model is used in the OpenGL fixed-function pipeline (and as far as I know also in DX9, but I'm not 100% sure). Modern games don't really use these shading models any more, as they are very expensive when there are numerous objects and light sources in a scene, and much more efficient techniques such as deferred shading are pretty much the industry standard at the moment in regards to scene lighting and shadowing. Even though the shading models may have changed and aren't really all that relevant, the reflection models explained here are still in use. Continue reading “DirectX10 Tutorial 8: Lighting Theory and HLSL”
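To give a taste of what the tutorial covers: the Blinn-Phong specular term replaces Phong's reflected light vector with the half vector between the light and view directions. A minimal CPU-side sketch of that term in plain C++ (the full tutorial does this in HLSL; the vector types and helper names here are made up for illustration):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 Normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Blinn-Phong specular: instead of reflecting the light vector about the
// normal (Phong), use the half vector between the to-light and to-eye
// directions and raise N.H to the shininess exponent.
float BlinnPhongSpecular(Vec3 normal, Vec3 toLight, Vec3 toEye, float shininess)
{
    Vec3 h = Normalize({ toLight.x + toEye.x,
                         toLight.y + toEye.y,
                         toLight.z + toEye.z });
    float nDotH = Dot(Normalize(normal), h);
    return std::pow(nDotH > 0.0f ? nDotH : 0.0f, shininess);
}
```

The highlight is brightest when the half vector lines up with the surface normal (viewer looking straight down the mirror direction) and falls off with the shininess exponent.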

DirectX10 Tutorial 7: Viewports

This is going to be a very brief tutorial; the idea for it came about from a comment on my very first tutorial about using multiple viewports. I assumed that using multiple viewports would be a simple matter of just calling a SetViewport method like in DX9, but it isn't. I tried finding some info online but there is almost nothing available, so I had to figure it out on my own. There are two methods to get multiple viewports working. The first requires a state change when selecting the viewports, but I don't think the cost of that is too prohibitive, since you would probably only swap viewports once per viewport during scene rendering. The second method involves using a geometry shader to specify which viewport to use during the clipping/screen mapping stage in the pipeline.

What is a viewport

Well, let's first discuss what a viewport actually is; if you Google a bit you'll find almost no information regarding viewports or what they actually are (and there is absolutely no info in the DX documentation). A viewport is a rectangle that defines the area of the frame buffer that you are rendering to. Viewports do have depth values, which affect the projected z range of any primitives in the viewport, but this is only used in very advanced cases, so you should always set the near depth to 0 and the far depth to 1. If we imagine a car game in which we have a rear view mirror, a simple method to draw the rear view mirror contents is to set the viewport to the mirror area, rotate the camera to face backwards and render the scene. Another common use in games is when you see another player's viewpoint within your HUD (Ghost Recon does this quite often); once again, all that is required is to set the viewport to the area of your final image you want to render to and then render the scene from the other player's viewpoint. Continue reading “DirectX10 Tutorial 7: Viewports”
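The rear view mirror example boils down to computing a sub-rectangle of the back buffer. A small sketch of that arithmetic, using a stand-in struct rather than the real D3D10_VIEWPORT (the struct and helper here are hypothetical, and the fractions are just example values):

```cpp
// Stand-in for the viewport rectangle: top-left corner plus size, in pixels.
// The real D3D viewport additionally carries MinDepth/MaxDepth, which, as
// noted above, you would normally leave at 0 and 1.
struct ViewportRect { int topLeftX, topLeftY, width, height; };

// Place a sub-viewport (e.g. a rear view mirror) as fractions of the
// back buffer: fracX/fracY position the top-left corner, fracW/fracH size it.
ViewportRect MakeSubViewport(int backBufferW, int backBufferH,
                             float fracX, float fracY, float fracW, float fracH)
{
    return { (int)(backBufferW * fracX), (int)(backBufferH * fracY),
             (int)(backBufferW * fracW), (int)(backBufferH * fracH) };
}
```

For an 800x600 back buffer, a mirror starting a quarter of the way across the top and covering half the width and a quarter of the height would be the rectangle (200, 0) with size 400x150; you would then set that rectangle as the active viewport and render the backwards-facing camera into it.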

DirectX10 Tutorial 6: Blending and Alpha Blending

It's been ages since my last DirectX 10 tutorial and I apologize; I've been buried under a ton of work and haven't had much free time lately. This is going to be a very short tutorial on pretty much the final stage in the rendering pipeline: color blending. If you recall, each frame displayed on the screen is simply a bitmap that gets displayed and updated multiple times a second. This bitmap is called the frame buffer. The frame buffer is technically the image we see at any given point, while the back buffer (assuming you are double buffering) is what you actually draw to (referred to as your render target). Only once you finish drawing do you display the back buffer to the screen by swapping the frame buffer and the back buffer using the Present member of the DX10 swap chain class.

Now think back to the depth testing tutorial where we displayed a cube and had to enable depth testing for it to render properly. A cube is made up of 6 sides with 2 faces per side, so that is 12 triangles we have to draw for each cube. The graphics API draws one triangle at a time to the back buffer and uses the depth buffer to check whether it may overwrite a pixel in the back buffer, i.e. whether the new pixel to be drawn is in front of the existing one. If this test passes then the API is given permission to overwrite that pixel's value, but it's not as simple as that! Continue reading “DirectX10 Tutorial 6: Blending and Alpha Blending”
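As a preview of where the full tutorial goes: the classic alpha blend combines the incoming (source) pixel with the pixel already in the back buffer (destination) weighted by the source alpha, which is what the SRC_ALPHA / INV_SRC_ALPHA blend state pair expresses. A one-channel sketch of that equation (plain C++, not the actual blend state setup):

```cpp
// Standard "over" alpha blend for a single colour channel:
// final = source * srcAlpha + destination * (1 - srcAlpha).
// This is what the D3D10_BLEND_SRC_ALPHA / D3D10_BLEND_INV_SRC_ALPHA
// source/destination blend factors compute per pixel.
float AlphaBlend(float src, float dst, float srcAlpha)
{
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}
```

An alpha of 1 fully replaces the back buffer pixel, an alpha of 0 leaves it untouched, and anything in between mixes the two, which is why draw order matters for transparent geometry.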

3D DirectX10 Free Look Camera (Timer based)

Introduction:

Okay, so I promised I'd write a tutorial on writing a simple free look vector-based camera. This tutorial doesn't only apply to DirectX10 but to pretty much any graphics API. We want to keep things simple initially, so the simplest possible camera we can implement, short of a first person style camera, is a free look camera (without roll): basically only two degrees of freedom, left/right and up/down. We are also going to implement some basic movement controls: forwards/backwards and strafe left/right. Continue reading “3D DirectX10 Free Look Camera (Timer based)”
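The two degrees of freedom mentioned above are usually stored as a yaw angle (left/right) and a pitch angle (up/down), from which the camera's forward vector is rebuilt every frame. A minimal sketch of that step, assuming a left-handed frame with +Y up and +Z forward at yaw = pitch = 0 (the struct and function names are made up for illustration):

```cpp
#include <cmath>

struct Float3 { float x, y, z; };

// Rebuild the camera's forward vector from yaw (rotation about Y) and
// pitch (rotation about the camera's side axis), both in radians.
// With no roll, these two angles fully determine the look direction.
Float3 ForwardFromYawPitch(float yaw, float pitch)
{
    return { std::sin(yaw) * std::cos(pitch),
             std::sin(pitch),
             std::cos(yaw) * std::cos(pitch) };
}
```

Forwards/backwards movement then just adds or subtracts this vector (scaled by speed and frame time) from the camera position, and the strafe direction comes from crossing it with the world up vector.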

My attempt at a DX10 game engine… Name Ideas

So I've started developing the AI test bed for my masters experiments, and since I kinda wanted something that looked nice, I basically started developing a game engine without knowing it 😛

I've been working on it for around a week now, and have a very basic renderer and a basic camera system going… The next step will be developing the scene graph and spatial data structures needed for rendering. I've been doing so much reading on scene graphs and so on that it's coming out of my ears, and yet I'm not any closer to having an idea of a good solution. I could probably do my entire masters on scene graphs and spatial sorting.

Anyway, I'm going to discontinue my DX10 tutorials, since all the future tutorials will be based off of my engine anyway; instead, I'm going to start a new series of tutorials on building a very basic DX10 game engine.

The number of files in the project is growing, and I need to come up with a nice name so I can start encapsulating the classes in namespaces and have nice uniform naming across the components. Since the engine is going to be super super simple, I was thinking of using one of the following as the engine name:

  • Cimplicity
  • basikEngine
  • CimplEngine
  • SimplEngine
  • engineBasix

Any other suggestions?