
Vulkan Renderer

    This is a project that I am actively working on with Ethan Heil, with the dual goals of further learning the Vulkan API and improving our graphics programming skills and knowledge. At its current stage the renderer loads a hard-coded 3D .obj file of a Utah teapot and maps a texture to it, with a controllable camera, Phong lighting, and a wireframe rendering mode. Additionally, Ethan has integrated ImGui to create a debug interface that allows uniforms and other variables to be modified at runtime, alongside a basic scene hierarchy. From here we would like to implement more advanced lighting, eventually working up to PBR with global illumination as well as ray- and path-traced rendering modes, and to add support for multiple objects and a skybox. Below I go over the parts I had primary responsibility for and the challenges I faced; please visit Ethan Heil’s writeup to see his extensive work on the project. Apologies if sections seem somewhat disjointed: this was written over the course of several months, and I have yet to find the motivation to rewrite it into a more cohesive whole.

Current Work

    At this moment we are (yet again) refactoring the codebase so that it better supports the development of a material system.

Model Loading

    Having attempted to write a robust file parser for 3D object files before, I elected to make use of a couple of common libraries for this application: tinyobjloader for parsing .obj and .mtl files and stb_image for loading textures. Although these have made it significantly easier to focus on the actual graphics engineering, I would like to revisit this at a later date and build custom parsers for the renderer. The main challenge I ran into throughout this process was figuring out how to get the data into the exact format that Vulkan needed (a very common theme here). Once I figured out the proper way to get the data from the tinyobjloader structures into what Vulkan was looking for, it was just a matter of doing it the proper number of times. I will note, though, that my current implementation does not check for duplicate vertices, which is something I aim to address in the future.
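
    To make this concrete, here is a minimal sketch of that loading path. The Vertex layout and function name are illustrative stand-ins rather than the renderer’s actual code, and it mirrors the current behavior: every face corner becomes its own vertex, with no duplicate-vertex check yet.

```cpp
// Illustrative sketch of loading an .obj with tinyobjloader into an interleaved
// vertex/index list; the Vertex struct here is a simplified stand-in.
#define TINYOBJLOADER_IMPLEMENTATION   // define in exactly one .cpp file
#include <tiny_obj_loader.h>
#include <glm/glm.hpp>
#include <cstdint>
#include <stdexcept>
#include <string>
#include <vector>

struct Vertex {
    glm::vec3 pos;
    glm::vec3 color;
    glm::vec2 texCoord;
};

void loadModel(const std::string& path,
               std::vector<Vertex>& vertices,
               std::vector<uint32_t>& indices)
{
    tinyobj::attrib_t attrib;
    std::vector<tinyobj::shape_t> shapes;
    std::vector<tinyobj::material_t> materials;
    std::string warn, err;

    if (!tinyobj::LoadObj(&attrib, &shapes, &materials, &warn, &err, path.c_str())) {
        throw std::runtime_error(warn + err);
    }

    // tinyobjloader hands back flat float arrays indexed per face corner; repack
    // them into the interleaved layout the pipeline's vertex input expects.
    for (const auto& shape : shapes) {
        for (const auto& index : shape.mesh.indices) {
            Vertex vertex{};
            vertex.pos = {
                attrib.vertices[3 * index.vertex_index + 0],
                attrib.vertices[3 * index.vertex_index + 1],
                attrib.vertices[3 * index.vertex_index + 2]
            };
            vertex.texCoord = {
                attrib.texcoords[2 * index.texcoord_index + 0],
                1.0f - attrib.texcoords[2 * index.texcoord_index + 1]  // flip V for Vulkan
            };
            vertex.color = { 1.0f, 1.0f, 1.0f };

            // No deduplication yet: each face corner gets its own vertex.
            vertices.push_back(vertex);
            indices.push_back(static_cast<uint32_t>(indices.size()));
        }
    }
}
```

    The V flip is there because .obj files treat 0 as the bottom of the image, while Vulkan’s texture coordinates start at the top.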

General Program Architecture

    A large goal moving out of our first pass at this renderer was to fully restructure the architecture of the project. Initially, as a product of both our laziness and the sources we were referencing, this renderer started out as a single-file program. That is, to put it mildly, horribly unsustainable and confusing for a large, complex project. As a result we had to sit down and puzzle out a usable, extendable architecture before we could continue building on our work. One of our main objectives with this architecture was encapsulation without abstraction: objects and systems that interact without knowing exactly what is inside each other, while maintaining a logical division of data. This arose both out of commonly understood good practice and out of our experiences working in other custom frameworks where even basic data types were abstracted to no end, which, while it does have demonstrable benefits in certain situations, can make work difficult to follow at best and painful to comprehend at worst.

    The issue with implementing an architecture of this nature was that our initial work was very much intertwined, occurring all within a single file, meaning that a great deal of functionality had to be broken out in order to ensure scalability. After a bit of bargaining we realized that the only way to deal with this was to simply sit down and rebuild most of the program, and we set out doing just that over the course of a day or two. What we ended up with is our current architecture, where most of the encapsulation is based on functionality, with a focus on building a scene hierarchy with a component system. Is it a paragon of good practice? No. Does it get the job done? Certainly. We will simply have to keep monitoring our solution and refine it to fit our evolving needs as they arise.
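
    To illustrate the kind of structure I mean, here is a deliberately simplified sketch of a scene-hierarchy node that owns a list of components. The class names and layout are illustrative only, not the renderer’s actual types.

```cpp
// Simplified illustration of a scene hierarchy with a component system: each node
// owns its components and its children, and updates propagate down the tree.
#include <memory>
#include <string>
#include <utility>
#include <vector>

class Component {
public:
    virtual ~Component() = default;
    virtual void update(float dt) = 0;   // per-frame logic, e.g. animating a light
};

class SceneNode {
public:
    explicit SceneNode(std::string name) : mName(std::move(name)) {}

    SceneNode& addChild(std::string name) {
        mChildren.push_back(std::make_unique<SceneNode>(std::move(name)));
        return *mChildren.back();
    }

    void addComponent(std::unique_ptr<Component> component) {
        mComponents.push_back(std::move(component));
    }

    void update(float dt) {
        for (auto& component : mComponents) component->update(dt);
        for (auto& child : mChildren) child->update(dt);
    }

private:
    std::string mName;
    std::vector<std::unique_ptr<Component>> mComponents;
    std::vector<std::unique_ptr<SceneNode>> mChildren;
};
```

    The important part for us is the division of responsibility, not the exact shape of the classes.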

Phong Lighting

    I have finally achieved what was initially my bare-minimum goal for this project: implementing the Phong lighting model, with the lighting data passed through a separate uniform buffer so that it can be modified at runtime from the CPU. Getting to this point has taken more work than I thought possible in my wildest dreams, but it has been more enlightening about how best to utilize Vulkan than anything else I’ve worked on.
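
    As a rough sketch of what “a separate uniform buffer” means at the descriptor level, the lighting data gets its own binding alongside the transform uniform. The binding numbers and stage flags here are my shorthand for the idea, not necessarily what our renderer actually uses.

```cpp
// Sketch: a descriptor set layout with the per-object transform uniform at binding 0
// and an independent lighting uniform buffer at binding 1. Because the lighting data
// lives in its own buffer, the CPU (e.g. the ImGui debug panel) can rewrite it every
// frame without touching the transform data.
#include <vulkan/vulkan.h>
#include <array>
#include <stdexcept>

VkDescriptorSetLayout createDescriptorSetLayout(VkDevice device)
{
    VkDescriptorSetLayoutBinding mvpBinding{};
    mvpBinding.binding = 0;
    mvpBinding.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
    mvpBinding.descriptorCount = 1;
    mvpBinding.stageFlags = VK_SHADER_STAGE_VERTEX_BIT;

    VkDescriptorSetLayoutBinding lightingBinding{};
    lightingBinding.binding = 1;
    lightingBinding.descriptorType = VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER;
    lightingBinding.descriptorCount = 1;
    lightingBinding.stageFlags = VK_SHADER_STAGE_FRAGMENT_BIT;  // read where Phong is evaluated

    std::array<VkDescriptorSetLayoutBinding, 2> bindings = { mvpBinding, lightingBinding };

    VkDescriptorSetLayoutCreateInfo layoutInfo{};
    layoutInfo.sType = VK_STRUCTURE_TYPE_DESCRIPTOR_SET_LAYOUT_CREATE_INFO;
    layoutInfo.bindingCount = static_cast<uint32_t>(bindings.size());
    layoutInfo.pBindings = bindings.data();

    VkDescriptorSetLayout layout;
    if (vkCreateDescriptorSetLayout(device, &layoutInfo, nullptr, &layout) != VK_SUCCESS) {
        throw std::runtime_error("failed to create descriptor set layout");
    }
    return layout;
}
```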

    My initial attempt at implementing lighting relied on a simple Lambert shader, passing data from the CPU as uniforms. This did not work: the shadows were horribly misplaced relative to the location of the light, despite the shader logic having been tested in a separate, known-working framework.

    After exploring many different avenues and reading hundreds of lines of documentation, I discovered a small aspect of Vulkan that, in my inexperience, I had never run into while working in other frameworks: memory alignment of buffers. We discovered this after several hours of bugfixing ended with us adjusting the position of the light and watching the color change instead, followed by horrified screams. The underlying issue was that the uniform buffer was not packed properly: it contained a vec2 that was not correctly aligned. This meant the shader received the right data in the wrong memory locations, even though everything was set accurately on the CPU. The fix was simply to pack the memory following Vulkan’s specification, mostly by using alignas() to ensure that every variable we sent was aligned on a 16-byte boundary. Even though this seems like a small thing, it held up our implementation of lighting for weeks, and in practice it is exactly these kinds of bugs that are so hard to diagnose and so important to learn from, because the program is just doing what you tell it to. Dealing with memory alignment is a pain, even more so when data is being sent between the CPU and GPU, because there are few tools for debugging shaders in the way we are accustomed to examining the memory of a CPU-side program during execution. As a result, debugging has to take a more roundabout approach, often with more trial and error, and the programmer has to lean on their own experience to eke out the information they are looking for.
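
    Here is a simplified reconstruction of the bug and the fix. The member names are invented, but the shape of the problem is the same: with GLM’s default layout the vec3 lands at offset 8 on the CPU while the shader’s uniform block expects it at offset 16, so the GPU reads the light data from the wrong place.

```cpp
#include <glm/glm.hpp>

// How the broken version looked (illustrative field names): the vec2 only occupies
// 8 bytes, so the vec3 that follows starts at CPU offset 8, while std140-style
// uniform rules put it at offset 16 in the shader.
struct LightingUBO_Misaligned {
    glm::vec2 screenInfo;
    glm::vec3 lightPos;     // CPU offset 8, shader expects 16 -> garbage lighting
    glm::vec3 lightColor;
};

// The fix: force every member onto a 16-byte boundary with alignas(16) so the
// CPU-side layout matches what the shader reads.
struct LightingUBO {
    alignas(16) glm::vec2 screenInfo;
    alignas(16) glm::vec3 lightPos;
    alignas(16) glm::vec3 lightColor;
};
```

    GLM also offers a GLM_FORCE_DEFAULT_ALIGNED_GENTYPES define that applies this sort of alignment automatically, although as far as I know it does not cover every case (nested structures in particular).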

    Dealing with this issue more or less encapsulates my experience with Vulkan. As any naturally curious programmer would, after recognizing and resolving it I immediately asked why the memory is structured this way rather than in a way that protects the programmer from unknowingly making this mistake. When it comes down to it, as with almost everything else in Vulkan, the need to explicitly align your buffer memory is a tool for squeezing every last bit of optimization out of your program. When you can control exactly how your variables are laid out in memory according to Vulkan’s rules, you can orchestrate their positioning to minimize the size of a buffer, for example by using a 4-byte scalar to pad out a 12-byte vec3 so that the pair aligns to 16 bytes. I know this doesn’t sound like much, but Vulkan is all about cutting out every single thing you don’t explicitly need, no matter how minuscule, and having minute control over the functioning of your software. In the modern graphics landscape, a 16-byte difference in the size of a buffer can be a huge deal when working with computationally hungry realistic rendering techniques like path tracing. Other APIs don’t always give you the fine-grained control needed for these optimizations, and that can be the difference between an application being capable of real-time rendering and not. Placing this control in the hands of the developer is one of the main reasons that, through this project, I am growing to love Vulkan.
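
    As a purely illustrative example of that kind of packing (these fields are not from our renderer), a scalar can sit in the 4 bytes that would otherwise be dead padding after each 12-byte vec3:

```cpp
#include <glm/glm.hpp>

// Two 16-byte slots instead of four: each float completes the slot started by the
// preceding vec3, so the whole struct is 32 bytes rather than the 64 it would take
// if every member were padded out to its own 16-byte slot.
struct PackedPointLight {
    glm::vec3 position;   // 12 bytes
    float     intensity;  // fills the remaining 4 bytes of the first slot
    glm::vec3 color;      // 12 bytes
    float     radius;     // fills the remaining 4 bytes of the second slot
};
```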

Future Plans

    If we are being honest, we would like to explore every corner of graphics programming with this renderer (although we each have different main interests), but we are only human, so we will have to settle for less. Our eventual realistic goal is an application similar to Shadertoy, where users can plug in their own shaders, play around with settings, and see the results rendered. I have chosen this goal because it gives us a way to demonstrate and expand our graphics knowledge while providing a tool that may help others do the same. As far as immediate next steps go, I would like to further refine our architecture to ensure scalability, implement vertex deduplication, and move from Phong to more advanced lighting. Building a material system is one of the first steps we are taking toward that scalability, since it is a prerequisite for easily rendering an arbitrary number of objects.

    With the knowledge that I have gained and continue to gain through this project, I hope to become a better and more efficient graphics engineer on whatever software I work on. Getting to know an API at the granular level required to build a project from scratch with it has really helped me learn the little tips and tricks that would otherwise take a long time to pick up. I have found the level of control that Vulkan allows almost intoxicating, and I can’t wait to put this new knowledge to use.

Working from a Tutorial

    As you may have noticed, we started this project working from an excellent tutorial by Alexander Overvoorde, which has had its benefits and its curses. Once we get a little farther in our efforts I would like to give my opinion on this in the context of our project, which may take the form of a blog post rather than an addition to this page, so keep your eyes open.

