Software/Hardware Rasterizer

Description

After building a custom Ray Tracer from scratch, this Rasterizer was the follow-up on my graphics programming journey. The main focus of this project was getting familiar with the rendering pipeline and understanding its fundamental steps by re-creating it from scratch. It also introduced several topics the Ray Tracer did not cover.

As a follow-up, the same Rasterizer was also implemented in the DirectX11 framework, with the ability to toggle between the software and hardware Rasterizer at runtime. In my free time I have profiled and refactored the project, added support for multiple lights, and implemented a PBR version of the Cook-Torrance BRDF (as seen in my Ray Tracer project).
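
How the toggle is wired up is not part of the snippets below; as a minimal sketch of the idea, assuming a hypothetical UseHardwareRasterizer flag on KeyBindInfo and a hypothetical RenderD3D entry point (the actual key handling and scene accessors in the project differ):

void Elite::Renderer::Render(const SceneGraph& scene, const KeyBindInfo& keyBindInfo)
{
	//Hypothetical flag, flipped elsewhere by a key press
	if (keyBindInfo.UseHardwareRasterizer)
	{
		//DirectX11 path: clear the render target, bind shaders/buffers and issue the draw calls
		RenderD3D(scene);
	}
	else
	{
		//Software path: the CPU rasterizer shown in the snippets below
		RenderSRAS(scene.GetTriangleMeshes(), scene.GetMaterials(), scene.GetLights(), scene.GetCamera(), keyBindInfo);
	}
}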

Topics Covered

Even with all this progress, the Rasterizer can definitely be expanded upon further, be it through optimization or extra features such as Multithreading, Indirect Lighting, Reflections, Shadows, and Anti-Aliasing, to name a few.

Most Interesting Code Snippets

Triangle Hit
bool Triangle::Hit(const Elite::FPoint2& pixel, HitRecord& hitRecord) const
{
    //Before anything else, if we have a NoCulling setting we get visual artifacts
    //That's why we're going to determine what side we're looking at from the get-go, and simply cull away the opposite of that
    ECullMode cullmode = m_CullMode;
    if (cullmode == ECullMode::NoCulling)
    {
        //Calculate normal for this triangle (reference: raytracer normal pre-calculation)
        FVector3 v0_v1 = FVector3(m_InputVertices[1].Position - m_InputVertices[0].Position);
        FVector3 v0_v2 = FVector3(m_InputVertices[2].Position - m_InputVertices[0].Position);
        FVector3 normal = Elite::GetNormalized(Elite::Cross(v0_v1, v0_v2));
        float dot = Elite::Dot(normal, hitRecord.ViewDirection);
        if (dot > 0)
        {
            //Means we're looking in the same direction as the normal -> backface normal -> so cull front
            cullmode = ECullMode::FrontCulling;
        }
        else
        {
            //Means we're looking against the normal -> frontface normal -> so cull back
            cullmode = ECullMode::BackCulling;
        }
    }

    //Total area needed to decide the weight of a vertex later
    FVector2 a{ m_TransformedVertices[1].Position - m_TransformedVertices[0].Position };
    FVector2 b{ m_TransformedVertices[2].Position - m_TransformedVertices[0].Position };
    float totalArea = Cross(b, a);
    std::array<float, 3> weights;

    //Check first edge
    FVector2 edgeA{ m_TransformedVertices[1].Position - m_TransformedVertices[0].Position };
    FVector2 toPixel{ pixel - FPoint2(m_TransformedVertices[0].Position) };

    //Inside-outside test
    //Cullmode integration (tried doing it the same way using dot products like in raytracer -> results in artifacts)
    //-> Figured it could have to do with the order of how vertices are defined (counterclockwise vs clockwise)
    //-> On Discord, people explained that in the inside-outside test
    //-> we can check the signs of the cross products along with the cullmode to determine whether the pixel should be visible or not
    float signedArea = Cross(toPixel, edgeA);

    //Cullmode check
    if (cullmode == ECullMode::BackCulling && (signedArea < 0)) return false;
    else if (cullmode == ECullMode::FrontCulling && (signedArea > 0)) return false;

    //Set weight of v2
    weights[2] = signedArea / totalArea;

    //Continue with next edge
    FVector2 edgeB{ m_TransformedVertices[2].Position - m_TransformedVertices[1].Position };
    toPixel = pixel - FPoint2(m_TransformedVertices[1].Position);

    //Cullmode check
    signedArea = Cross(toPixel, edgeB);
    if (cullmode == ECullMode::BackCulling && (signedArea < 0)) return false;
    else if (cullmode == ECullMode::FrontCulling && (signedArea > 0)) return false;

    //Set weight of v0
    weights[0] = signedArea / totalArea;

    //Continue with last edge
    FVector2 edgeC{ m_TransformedVertices[0].Position - m_TransformedVertices[2].Position };
    toPixel = pixel - FPoint2(m_TransformedVertices[2].Position);

    //Cullmode check
    signedArea = Cross(toPixel, edgeC);
    if (cullmode == ECullMode::BackCulling && (signedArea < 0)) return false;
    else if (cullmode == ECullMode::FrontCulling && (signedArea > 0)) return false;

    //Set weight of v1
    weights[1] = signedArea / totalArea;

    //Interpolated values
    hitRecord.InterpolatedZ = GetInterpolatedDepthInSS(weights);
    hitRecord.InterpolatedW = GetInterpolatedDepthInVS(weights);
    hitRecord.InterpolatedColor = GetInterpolatedColor(weights, hitRecord.InterpolatedW);
    hitRecord.InterpolatedUV = GetInterpolatedUV(weights, hitRecord.InterpolatedW);
    hitRecord.InterpolatedVertexNormal = GetInterpolatedVertexNormal(weights, hitRecord.InterpolatedW);
    hitRecord.InterpolatedTangent = GetInterpolatedTangent(weights, hitRecord.InterpolatedW);
    hitRecord.ViewDirection = GetInterpolatedViewDirection(weights, hitRecord.InterpolatedW);
    hitRecord.MatID = m_MaterialID;
    return true;
}
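The GetInterpolated* helpers called at the end of Hit are not listed here. As a minimal sketch of the perspective-correct interpolation they perform, assuming the transformed vertices still carry their view-space depth in Position.w (an illustration under that assumption, not the project's exact code):

float Triangle::GetInterpolatedDepthInSS(const std::array<float, 3>& weights) const
{
    //Screen-space depth: interpolate 1/z linearly across the triangle and invert the result
    float invZ = weights[0] / m_TransformedVertices[0].Position.z
               + weights[1] / m_TransformedVertices[1].Position.z
               + weights[2] / m_TransformedVertices[2].Position.z;
    return 1.f / invZ;
}

Elite::FVector2 Triangle::GetInterpolatedUV(const std::array<float, 3>& weights, float interpolatedW) const
{
    //Perspective-correct attributes: divide each vertex attribute by its view-space depth (w),
    //interpolate with the barycentric weights, then multiply back by the interpolated w
    FVector2 uv = m_TransformedVertices[0].UV * (weights[0] / m_TransformedVertices[0].Position.w)
                + m_TransformedVertices[1].UV * (weights[1] / m_TransformedVertices[1].Position.w)
                + m_TransformedVertices[2].UV * (weights[2] / m_TransformedVertices[2].Position.w);
    return uv * interpolatedW;
}

GetInterpolatedDepthInVS would follow the same 1/w pattern to produce the InterpolatedW that is fed back into the other attribute helpers.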
Vertex Transformations
void Triangle::TransformVertices(float width, float height, const Elite::FMatrix4& worldMatrix, Camera* pCamera, const KeyBindInfo& keyBindInfo, bool invertToRHS)
{
    //Since the vertices are parsed for DirectX (LHS) we have to convert them to work for our SRAS in RHS
    if (invertToRHS)
        InvertAttributesToRHS(m_InputVertices);

    //Gather camera variables
    const auto& cameraToWorld = pCamera->GetLookAtMatrix();
    const auto& projMatrix = pCamera->GetProjMatrix();

    //Model to WorldSpace -> to ViewSpace -> to Projection Space
    Elite::FMatrix4 worldViewProjMatrix = projMatrix * Elite::Inverse(cameraToWorld) * ((invertToRHS) ? Elite::Inverse(worldMatrix) : worldMatrix);
    for (int i = 0; i < 3; ++i)
    {
        //Copy over attributes
        m_TransformedVertices[i].Color = m_InputVertices[i].Color;
        m_TransformedVertices[i].UV = m_InputVertices[i].UV;

        //Transform normals and tangents to WORLD SPACE only
        m_TransformedVertices[i].VertexNormal = FMatrix3(worldMatrix) * GetNormalized(m_InputVertices[i].VertexNormal);
        m_TransformedVertices[i].Tangent = FMatrix3(worldMatrix) * GetNormalized(m_InputVertices[i].Tangent);

        //Make world position and store view directions
        m_TransformedVertices[i].WorldPosition = worldMatrix * FPoint4(m_InputVertices[i].Position, 1.f);
        m_ViewDirection[i] = GetNormalized(m_TransformedVertices[i].WorldPosition - FPoint4(pCamera->GetPosition(), 1.f));

        //Model in homogeneous Space -> Clipping Space
        m_TransformedVertices[i].Position = worldViewProjMatrix * FPoint4(m_InputVertices[i].Position, 1.f);
    }

    //Simple frustum culling that culls away the triangle as soon as 1 vertex is out of the view plane
    if (keyBindInfo.UseSimpleFrustumCulling)
    {
        DoSimpleFrustumCulling();
        if (!m_IsInsideFrustum)
            return;
    }
    //3D Clipping applied on the triangle (only culls when all vertices are out of view plane)
    else
    {
        //Clear clipped triangles
        m_ClippedTriangles.clear();
        m_IsTriangleClipped = false;

        //For now I'm only testing clipping to the left side of the screen
        DoClippingX();
        if (!m_IsInsideFrustum)
            return;
    }

    //Only applies in case 3D clipping is applied -> will further transform all the newly made triangles
    if (m_IsTriangleClipped)
    {
        for (Triangle& t : m_ClippedTriangles)
        {
            //Further transformation of spaces
            for (int i = 0; i < 3; ++i)
            {
                //Clipping space to NDC space
                //Perspective divide! -> ViewSpace z value is now stored in the w-component
                float viewSpaceZ = t.m_TransformedVertices[i].Position.w;
                t.m_TransformedVertices[i].Position.x /= viewSpaceZ;
                t.m_TransformedVertices[i].Position.y /= viewSpaceZ;
                t.m_TransformedVertices[i].Position.z /= viewSpaceZ;

                //Screen Space
                t.m_TransformedVertices[i].Position.x = (t.m_TransformedVertices[i].Position.x + 1) / 2.f * width;
                t.m_TransformedVertices[i].Position.y = (1 - t.m_TransformedVertices[i].Position.y) / 2.f * height;
            }
        }
    }
    //Transform original triangle vertices in case no clipping was necessary
    else
    {
        for (int i = 0; i < 3; ++i)
        {
            //Clipping space to NDC space
            //Perspective divide! -> ViewSpace z value is now stored in the w-component
            float viewSpaceZ = m_TransformedVertices[i].Position.w;
            m_TransformedVertices[i].Position.x /= viewSpaceZ;
            m_TransformedVertices[i].Position.y /= viewSpaceZ;
            m_TransformedVertices[i].Position.z /= viewSpaceZ;

            //Screen Space
            m_TransformedVertices[i].Position.x = (m_TransformedVertices[i].Position.x + 1) / 2.f * width;
            m_TransformedVertices[i].Position.y = (1 - m_TransformedVertices[i].Position.y) / 2.f * height;
        }
    }
}
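DoSimpleFrustumCulling and DoClippingX are not shown in this snippet. The simple variant only has to compare the clip-space coordinates against the w-component; a minimal sketch of that idea, assuming the m_IsInsideFrustum flag used above and a DirectX-style [0, w] depth range (an illustration, not the project's exact code):

void Triangle::DoSimpleFrustumCulling()
{
    //Cull the whole triangle as soon as one vertex falls outside the view volume
    m_IsInsideFrustum = true;
    for (int i = 0; i < 3; ++i)
    {
        //Positions are still in clip space at this point (before the perspective divide)
        const FPoint4& p = m_TransformedVertices[i].Position;
        if (p.x < -p.w || p.x > p.w ||
            p.y < -p.w || p.y > p.w ||
            p.z < 0.f || p.z > p.w)
        {
            m_IsInsideFrustum = false;
            return;
        }
    }
}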
Render Loop
void Elite::Renderer::RenderSRAS(const std::vector<TriangleMesh*>& pTriangleMeshes, const MaterialManager& materials, const LightManager& lights, Camera* pCamera, const KeyBindInfo& keyBindInfo)
{
	SDL_LockSurface(m_pBackBuffer);

	//Clear depth and color buffer
	for (uint32_t r = 0; r < m_Height; ++r)
	{
		for (uint32_t c = 0; c < m_Width; ++c)
		{
			m_DepthBuffer[c + (r * m_Width)] = FLT_MAX;
			m_pBackBufferPixels[c + (r * m_Width)] = SDL_MapRGB(m_pBackBuffer->format,
				static_cast<uint8_t>(50),
				static_cast<uint8_t>(50),
				static_cast<uint8_t>(50));
		}
	}

	//For every triangle mesh
	const auto& pLights = lights.GetLights();
	for (TriangleMesh* pTriangleMesh : pTriangleMeshes)
	{
		//Check if valid
		if (!pTriangleMesh->IsValid())
			continue;

		//Gather data from triangle mesh
		const auto& indexBuffer = pTriangleMesh->GetIndexBuffer();
		const auto& vertices = pTriangleMesh->GetVertexBuffer();
		Topology topology = pTriangleMesh->GetPrimitiveTopology();
		size_t incrementValue = (topology == Topology::TriangleList) ? 3 : 1;
		bool swapOnOdd = (topology != Topology::TriangleList);

		//Start looping over all indices 
		for (size_t i = 0; i < indexBuffer.size() - 2; i += incrementValue)
		{
			//Create triangle
			Triangle t = (swapOnOdd && i & 1)
				? Triangle //Swap last 2 indices on odd triangle in strip
				(
					Vertex_Input{ vertices[indexBuffer[i]] },
					Vertex_Input{ vertices[indexBuffer[i + 2]] },
					Vertex_Input{ vertices[indexBuffer[i + 1]] }
				)
				: Triangle //Else continue making triangles from a list or even triangle in strip
				(
					Vertex_Input{ vertices[indexBuffer[i]] },
					Vertex_Input{ vertices[indexBuffer[i + 1]] },
					Vertex_Input{ vertices[indexBuffer[i + 2]] }
			);

			//Transform triangle
			t.TransformVertices((float)m_Width, (float)m_Height, pTriangleMesh->GetWorldMatrix(), pCamera, keyBindInfo, true);

			//If triangle already isn't valid, continue
			if (!t.IsInsideFrustum())
				continue;

			//Set cullmode for upcoming hit check
			t.SetCullMode(pTriangleMesh->GetCullMode());

			//If triangle is clipped, loop over the pixels surrounding that triangle and render
			if (t.IsTriangleClipped())
			{
				auto& clippedTriangles = t.GetClippedTriangles();
				for (const Triangle& clippedTriangle : clippedTriangles)
				{
					PixelLoop(clippedTriangle, materials, pLights, keyBindInfo);
				}
			}
			//Else loop over the pixels surrounding the current triangle and render
			else
			{
				PixelLoop(t, materials, pLights, keyBindInfo);
			}
		}
	}

	SDL_UnlockSurface(m_pBackBuffer);
	SDL_BlitSurface(m_pBackBuffer, 0, m_pFrontBuffer, 0);
	SDL_UpdateWindowSurface(m_pWindow);
}
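PixelLoop itself is not listed here; conceptually it visits the pixels inside the triangle's screen-space bounding box, runs the Hit test, applies the depth test against m_DepthBuffer and writes the shaded color to the back buffer. A minimal sketch of that idea (the bounding-box helper and the exact priming of the hit record are assumptions, not the project's actual code):

void Elite::Renderer::PixelLoop(const Triangle& triangle, const MaterialManager& materials, const std::vector<Light*>& pLights, const KeyBindInfo& keyBindInfo)
{
	//Only visit the pixels inside the triangle's screen-space bounding box (hypothetical helper, clamped to the screen)
	BoundingBox box = triangle.GetBoundingBox(m_Width, m_Height);
	for (uint32_t r = box.Top; r < box.Bottom; ++r)
	{
		for (uint32_t c = box.Left; c < box.Right; ++c)
		{
			//The actual project also primes hitRecord.ViewDirection before Hit (it is read in the NoCulling check)
			HitRecord hitRecord{};
			if (!triangle.Hit(FPoint2(float(c) + 0.5f, float(r) + 0.5f), hitRecord))
				continue;

			//Depth test: only keep the fragment if it is closer than what is already stored
			if (hitRecord.InterpolatedZ >= m_DepthBuffer[c + (r * m_Width)])
				continue;
			m_DepthBuffer[c + (r * m_Width)] = hitRecord.InterpolatedZ;

			//Shade and write the pixel into the back buffer
			RGBColor color = PixelShading(hitRecord, materials, pLights, keyBindInfo);
			m_pBackBufferPixels[c + (r * m_Width)] = SDL_MapRGB(m_pBackBuffer->format,
				static_cast<uint8_t>(color.r * 255.f),
				static_cast<uint8_t>(color.g * 255.f),
				static_cast<uint8_t>(color.b * 255.f));
		}
	}
}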
Pixel Shading
Elite::RGBColor Elite::Renderer::PixelShading(const HitRecord& hitRecord, const MaterialManager& materialManager, const std::vector<Light*>& pLights, const KeyBindInfo& keyBindInfo)
{
	//Coloring in the returned result
	RGBColor finalColor{};
	if (keyBindInfo.UseDepthBufferAsColor)
	{
		//Use depth as gradient for color
		float depthColor = Elite::Remap(hitRecord.InterpolatedZ, 0.985f, 1.f);
		finalColor = { depthColor, depthColor, depthColor };
	}
	else
	{
		//If using a material, sample and shade with it; otherwise fall back to the color defined in the triangle
		if (keyBindInfo.UseMaterial)
		{
			//Get sampled texture color from texture matching the triangle
			Material* pMat = materialManager.GetMaterialByID(hitRecord.MatID);
			if (!pMat)
				return RGBColor(0, 0, 0);

			//------ Diffuse ------
			Elite::RGBColor diffuse{};
			float diffuseReflectance = pMat->GetDiffuseReflectance();
			if (pMat->UseDiffuseMap())
			{
				diffuse = pMat->GetDiffuseTexture()->Sample(hitRecord.InterpolatedUV) * diffuseReflectance;
			}
			else
			{
				diffuse = pMat->GetDiffuseColor() * diffuseReflectance;
			}

			//------ Normal ------
			FVector3 newNormal{};
			if (pMat->UseNormalMap())
			{
				//We define our tangent space
				FVector3 binormal = Cross(hitRecord.InterpolatedTangent, hitRecord.InterpolatedVertexNormal);
				FMatrix3 localTangentSpace = FMatrix3(hitRecord.InterpolatedTangent, binormal, hitRecord.InterpolatedVertexNormal);

				//Sample normal map
				Elite::RGBColor normalSample = pMat->GetNormalTexture()->Sample(hitRecord.InterpolatedUV);

				//An RGB color is stored in the [0, 255] range, while we need it in [-1, 1]
				//Sampled value is already returned in range [0, 1]
				normalSample = (normalSample * 2.f) - RGBColor(1.f, 1.f, 1.f);

				//Transform the sampled normal from tangent space to world space
				newNormal = GetNormalized(localTangentSpace * FVector3(normalSample.r, normalSample.g, normalSample.b));
			}
			else
			{
				//This value is already normalized
				newNormal = hitRecord.InterpolatedVertexNormal;
			}

			//Data to store information while contributing every light
			for (Light* pLight : pLights)
			{
				//------ Irradiance ------
				Elite::RGBColor irradiance = pLight->GetCalculatedIrradianceColor(newNormal, true);

				//Decide what BRDF to use:
				Material::MaterialWorkflow workflow{ pMat->GetMaterialWorkflow() };
				if (workflow == Material::MaterialWorkflow::SpecGloss)
				{
					//------ Phong BRDF ------
					//------ Specular ------
					//Color
					RGBColor specColor{};
					if (pMat->UseSpecularMap())
					{
						specColor = pMat->GetSpecularTexture()->Sample(hitRecord.InterpolatedUV);
					}
					else
					{
						specColor = pMat->GetSpecularColor();
					}

					//Shininess
					float shininess = pMat->GetShininess();
					if (pMat->UseGlossinessMap())
					{
						shininess *= pMat->GetGlossinessTexture()->Sample(hitRecord.InterpolatedUV).r;
					}

					//Reflectance
					float specReflectance = pMat->GetSpecularReflectance();

					//Because lightDir is initialized for a LHS in DX, we have to invert Z to get the same effect
					FVector3 lightDir = pLight->GetDirection(hitRecord, true);
					RGBColor specularReflect = BRDF::Phong(specReflectance, shininess, -lightDir, hitRecord.ViewDirection, newNormal);
					Elite::RGBColor phongSpecular = specColor * specularReflect;

					//Adding up contribution
					finalColor += irradiance * (diffuse / float(E_PI)) + phongSpecular;
					
				}
				else if (workflow == Material::MaterialWorkflow::MetalRough)
				{
					//------ Lambert Cook-Torrance BRDF ------
					//Because lightDir is initialized for a LHS in DX, we have to invert Z to get the same effect
					FVector3 lightDir = pLight->GetDirection(hitRecord, true);

					//------ Roughness ------
					float roughness = 0.6f;
					if (pMat->UseRoughnessMap())
					{
						roughness = pMat->GetRoughnessTexture()->Sample(hitRecord.InterpolatedUV).r;
					}

					//------ Metallic ------
					//Treat the metalness map as a binary mask: above 0.5 counts as metal
					int metallic = 0;
					if (pMat->UseMetalnessMap())
					{
						float metalSample = pMat->GetMetalnessTexture()->Sample(hitRecord.InterpolatedUV).r;
						metallic = (metalSample > 0.5f) ? 1 : 0;
					}

					//LambertCookTorrance BRDF
					RGBColor lambertCookTorranceBRDF = BRDF::LambertCookTorrance(diffuse, metallic, roughness, -lightDir, hitRecord.ViewDirection, newNormal);

					//Adding up contribution
					finalColor += irradiance * lambertCookTorranceBRDF;
				}
			}
		}
		else
		{
			finalColor = hitRecord.InterpolatedColor;
		}
	}

	//Clamp with MaxToOne if the color overflows
	if (finalColor.r > 1.f || finalColor.g > 1.f || finalColor.b > 1.f)
		finalColor.MaxToOne();

	return finalColor;
}
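BRDF::Phong and BRDF::LambertCookTorrance live in a separate BRDF helper that is not listed here. As a minimal sketch of what such a Lambert-diffuse / Cook-Torrance-specular BRDF can look like, assuming a GGX normal distribution, a Smith-Schlick geometry term and the Schlick Fresnel approximation (an illustration under those assumptions, not the project's exact implementation; ViewDirection is taken to point from the camera towards the surface, as set up in TransformVertices above):

//Requires <cmath> for powf and <algorithm> for std::max
RGBColor BRDF::LambertCookTorrance(const RGBColor& diffuseColor, int metallic, float roughness,
	const FVector3& lightDir, const FVector3& viewDir, const FVector3& normal)
{
	//Assumes lightDir points from the surface towards the light (it is negated at the call site)
	FVector3 l = GetNormalized(lightDir);
	FVector3 v = GetNormalized(-viewDir); //Surface towards the camera
	FVector3 h = GetNormalized(l + v);    //Half vector

	float NdotL = std::max(Dot(normal, l), 0.f);
	float NdotV = std::max(Dot(normal, v), 0.f);
	float NdotH = std::max(Dot(normal, h), 0.f);

	//Base reflectivity: dielectrics reflect roughly 4%, metals tint the reflection with their albedo
	RGBColor f0 = (metallic == 0) ? RGBColor(0.04f, 0.04f, 0.04f) : diffuseColor;

	//D: GGX / Trowbridge-Reitz normal distribution (alpha = roughness squared)
	float a2 = roughness * roughness * roughness * roughness;
	float dDenom = NdotH * NdotH * (a2 - 1.f) + 1.f;
	float D = a2 / (float(E_PI) * dDenom * dDenom);

	//F: Schlick Fresnel approximation
	RGBColor F = f0 + (RGBColor(1.f, 1.f, 1.f) - f0) * powf(1.f - std::max(Dot(h, v), 0.f), 5.f);

	//G: Smith-Schlick geometry term (shadowing / masking), direct-lighting remap of k
	float k = (roughness + 1.f) * (roughness + 1.f) / 8.f;
	float G = (NdotV / (NdotV * (1.f - k) + k)) * (NdotL / (NdotL * (1.f - k) + k));

	//Cook-Torrance specular term, guarded against division by zero
	RGBColor specular = F * (D * G / std::max(4.f * NdotV * NdotL, 0.0001f));

	//Energy conservation: metals have no diffuse contribution
	RGBColor kd = (metallic == 0) ? (RGBColor(1.f, 1.f, 1.f) - F) : RGBColor(0.f, 0.f, 0.f);
	return kd * diffuseColor / float(E_PI) + specular;
}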

CREDITS

Credits to Matthieu Delaere, a lecturer at Howest DAE, for writing the base files (math library, timer, color structs, SDL window).

Software Rasterizer (from scratch)

Hardware Rasterizer (DirectX11)
– Fire effect done through partial coverage

Hardware Rasterizer (DirectX11) 
– PBR Version with a robot from 3D assignment (plus boxes)