project_GLaMRE log 01

Welcome back! Last time we discussed the humble origins of Project GLaMRE, and how it was inspired by a series of C#/WPF practice projects from the book Computer Graphics: Principles and Practice. This will be a continuation of the last post, so if you'd like to get fully caught up, you can do so here.

The goal for the C++ engine was to handle all the current and future 3D projects from 'Computer Graphics: Principles and Practice', as well as to recreate all the previous 2D projects from the book. This objective gave me a sizable to-do list of features to add or create: a finite list with a clear starting point, where I simply added whatever feature seemed logical next. Here's a list of things I knew I would need; some were in the old engine, and others were new features the old engine might have struggled with:

  • Rendering a series of 2D and 3D primitive shapes (triangles, cubes, spheres, cylinders, etc.)
  • Phong Lighting/Shading
  • Directional light source (Light emanating in a single direction, like the sun)
  • Point lights (Like a light bulb)
  • Methods to easily Translate/Rotate/Scale models/meshes (see the sketch after this list)
  • Textures for models
  • GUI Elements (Buttons/Sliders/Text Fields)
  • Ability to import and export model data in .obj format
  • Receive Mouse/Keyboard input
  • Controllable camera
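
For the transform methods in particular, here's the flavor of thing I mean: a minimal sketch using the GLM math library (an assumption on my part; the engine's actual transform API may look different):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build a model matrix by composing translate, rotate, and scale.
glm::mat4 makeModelMatrix() {
    glm::mat4 model(1.0f);                                                         // start from identity
    model = glm::translate(model, glm::vec3(1.0f, 0.0f, 0.0f));                    // move right
    model = glm::rotate(model, glm::radians(45.0f), glm::vec3(0.0f, 1.0f, 0.0f));  // spin around Y
    model = glm::scale(model, glm::vec3(2.0f));                                    // double the size
    return model;
}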

I'm happy to say the C++ engine in its current state has all this functionality and then some (a few of these features can be seen in the current site banner, which was created with the C++ engine). Theoretically, it's already capable enough to recreate all the projects I showed in the last post, which will be my next goal. I imagine there are some more obscure features I forgot to add, but I should stumble across the need for them as I work through the old projects. Then comes the robot camel project (also mentioned in the first post), which already has me thinking about how I'll implement features that are 100% new to me, like key-frame animation. At that point, I will probably jump back into reading and studying the book and really take my time with the new concepts.

Animated site banner; rendered in the C++ engine.


I just want to say, I like C++ a lot more than C#: it's lower level and, as a result, very speedy. But it's been 7-8 years since I used it regularly. I had an intermediate knowledge of C++, and I was a "supplemental instructor" for the Data Structures and Algorithms class at my university, which was taught in C++. So I had a solid foundation, and switching to C++ also allowed me to work in a fully Linux dev environment, which was exciting and a massive relief. That being said, I am still pretty rusty, and I know C++ has changed somewhat over the years, so I'm sure I will have to go back and refactor the code I've written (and am writing, and will write) multiple times in the future. Despite that, getting back into the swing of things and relearning C++ has been great fun and very rewarding.

Since the initial goal with the new engine was essentially to catch up with the old one, a lot of the work felt like retreading familiar ground, and I was impatient to get the ball rolling. Taking my time felt less necessary, since I wasn't learning many new concepts; I had already covered them with the first engine. At the same time, I knew this could become a long-term project, so I aimed to follow C++ best practices and allow (a little bit) less chaos than before.

I decided to lean heavily on AI to speed up this process. This was my first time using GitHub Copilot, and it was pretty incredible. I can't say for sure, but I feel like it sped up this project by 5-10x. It was particularly useful in the early phases, because much of the code I was writing was generic OpenGL boilerplate: I could start typing the name of a function or class, and Copilot would autocomplete the entire thing. Of course, AI isn't perfect, and as the project grew more complex, this approach required a lot of additional troubleshooting and debugging.

Still, with the help of AI, I was able to create an OpenGL 'Hello World Triangle' application in record time. It just draws a simple triangle on the screen, but it marked a pivotal moment, placing me at a starting point similar to where I had been with the original WPF engine. My confidence in the project grew a lot from this milestone, and I think it was around this time that I came up with the name "Project GLaMRE." Now equipped with the ability to draw a triangle with vertices at arbitrary points, my next task was to project 3D model data onto the 2D screen and render 2D triangles to create the illusion of 3D graphics, just as I had done in the WPF engine. However, this was much easier said than done: working with OpenGL introduced a multitude of complexities and hurdles, which I look forward to discussing in detail in future posts.


Hello World Triangle program, AKA the humble beginnings.
 
To give you a sense of the progress made since that initial triangle program, consider this: it took only about 100 lines of code to render that simple triangle. Now I'd estimate the project at around 3,000-4,000 lines of code.
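
For anyone curious, here's a condensed sketch of what that kind of hello-triangle program typically looks like, assuming GLFW for windowing and glad for loading OpenGL functions (not my exact code, but the same shape):

#include <glad/glad.h>
#include <GLFW/glfw3.h>

const char* vertSrc = R"(#version 330 core
layout (location = 0) in vec3 aPos;
void main() { gl_Position = vec4(aPos, 1.0); })";

const char* fragSrc = R"(#version 330 core
out vec4 FragColor;
void main() { FragColor = vec4(1.0, 0.5, 0.2, 1.0); })";

int main() {
    // Create a window with a 3.3 core-profile OpenGL context.
    glfwInit();
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    GLFWwindow* window = glfwCreateWindow(800, 600, "Hello Triangle", nullptr, nullptr);
    glfwMakeContextCurrent(window);
    gladLoadGLLoader((GLADloadproc)glfwGetProcAddress);

    // Compile and link the two shaders into a program.
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vertSrc, nullptr);
    glCompileShader(vs);
    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fragSrc, nullptr);
    glCompileShader(fs);
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);
    glDeleteShader(vs);
    glDeleteShader(fs);

    // Upload three vertices (in normalized device coordinates) to the GPU.
    float verts[] = { -0.5f, -0.5f, 0.0f,
                       0.5f, -0.5f, 0.0f,
                       0.0f,  0.5f, 0.0f };
    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glGenBuffers(1, &vbo);
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);

    // Draw the triangle every frame until the window closes.
    while (!glfwWindowShouldClose(window)) {
        glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glUseProgram(prog);
        glBindVertexArray(vao);
        glDrawArrays(GL_TRIANGLES, 0, 3);
        glfwSwapBuffers(window);
        glfwPollEvents();
    }
    glfwTerminate();
    return 0;
}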
 
Going back to the topic of AI, the biggest time-saver was its ability to recognize and predict patterns in the code I had already typed. For example, when coding the logo for this blog, I started by creating a Model class object for each of the letters that make up 'Project GLaMRE.' I manually typed out the line of code for the first letter, 'P', naming the variable 'projectWord_P'. The name is meant not only to be (somewhat) clear to me and any other human who reads the code, but also to serve as a context clue for Copilot: it can read the variable name and infer that this will be a Model for the letter 'P' in the word 'Project'. I had also created a cube model in the lines immediately preceding this, providing additional context on how my Model class constructor works. So, when I typed just the class and variable name

Model projectWord_P 

it autocompleted the line to be:

Model projectWord_P = Model(modelDirectory / "P.obj", phongFlatShaderSource);

This was incredibly close to what I needed. By simply updating the filename to 'bld_logo_P.obj' (the actual file name for my 'P' model), I gave the AI even more context. It now understood how my Model class constructor worked, recognized that I was creating a model for the letter 'P', grasped that it was part of the word 'Project', and learned my file naming scheme of "bld_logo_<letter>.obj". With this initial setup, the AI could take over: when I moved to the next line, it accurately autocompleted the code for the subsequent letters 'R', 'O', 'J', and 'E'.
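
For illustration, the completed block looked something like this (reconstructed here; the variable names past 'P' simply follow the same pattern):

Model projectWord_P = Model(modelDirectory / "bld_logo_P.obj", phongFlatShaderSource);
Model projectWord_R = Model(modelDirectory / "bld_logo_R.obj", phongFlatShaderSource);
Model projectWord_O = Model(modelDirectory / "bld_logo_O.obj", phongFlatShaderSource);
Model projectWord_J = Model(modelDirectory / "bld_logo_J.obj", phongFlatShaderSource);
Model projectWord_E = Model(modelDirectory / "bld_logo_E.obj", phongFlatShaderSource);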

GitHub Copilot in action; auto-completing lines of code that would normally be tedious to type out.
 
 
Now, it wasn't smart enough to know automatically that the next word I wanted to make would be 'GLaMRE', but all I had to do to prime it was change one of the previous lines to:

Model glamreWord_G = Model(modelDirectory / "bld_logo_G.obj", phongFlatShaderSource);

And then it was able to predict that I was about to create letter models for the word 'GLaMRE'; all I had to do was let it autocomplete for me. Before Copilot, I was copying and pasting the same line and manually changing the letter/filename, which was tedious and time-consuming compared to how fast and automatic Copilot is.

I'd also like to talk a bit more about how I designed the logo/banner for the blog, as it's something I made recently in the current version of the C++ engine. For fun, practice, and style, I decided to design my own font by sketching out the letters for "PROJECT GLaMRE" on graph paper and marking vertex coordinates.


Font design sketches & coordinates.


Next, I wrote a simple program that lets me manually input these coordinates and outputs them as an .obj file, which can be imported into both my engine and applications like Blender. Despite my careful calculations, I occasionally ran into miscalculated vertices in the triangles; fortunately, having everything meticulously graphed and labeled made identifying and fixing these errors straightforward. Initially, the imported letters appeared flat, like a piece of paper, since the coordinates were only 2D. To achieve the 3D effect I envisioned, I imported each letter into Blender and extruded its faces into three-dimensional forms. I then exported them from Blender and imported them back into my engine, where I added color and shading effects. In the future, I aim to build face extrusion directly into my engine; for now, using Blender for this task doesn't feel like cheating, considering the effort I put into making my programs compatible with .obj model data for both input and output.
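
The coordinate-to-.obj tool itself is nothing fancy. The .obj format is plain text: one "v x y z" line per vertex and one "f i j k" line per triangle, using 1-based vertex indices. Here's a minimal sketch of the core idea (not my actual code; the struct and function names are made up for illustration):

#include <fstream>
#include <string>
#include <vector>

struct Vec2 { float x, y; };   // a 2D vertex from the graph paper
struct Tri  { int a, b, c; };  // 1-based vertex indices (.obj convention)

// Write flat 2D letter geometry out as a Wavefront .obj file (z = 0).
void writeObj(const std::string& path,
              const std::vector<Vec2>& verts,
              const std::vector<Tri>& tris) {
    std::ofstream out(path);
    for (const auto& v : verts)
        out << "v " << v.x << ' ' << v.y << " 0.0\n";            // vertex line
    for (const auto& t : tris)
        out << "f " << t.a << ' ' << t.b << ' ' << t.c << '\n';  // face line
}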
 

Using Blender to make my 2D letters 3D.


What even is normal, anyway?

I didn't directly choose the color scheme for the banner; the color scheme chose me. Just kidding, I just mean I didn't pick each color individually. Instead, the colors are based on the direction each triangle in the mesh is facing, known as its 'normal vector.' A normal vector is like an invisible arrow perpendicular to the triangle's surface, showing which way it faces.
 
To get a better idea of how this works: imagine a hollow sphere covered with a gradient of all possible colors on its inner surface. Now, picture a flat triangle placed inside this sphere, right at the center. Attached to the triangle is an arrow that extends outward from its middle, pointing in the direction the triangle is facing. Where this arrow touches the sphere determines the color of the triangle’s surface. For instance, if the arrow points to a blue section of the sphere, the triangle turns blue. Slightly tilting the triangle results in the arrow pointing to a slightly different color on the sphere, changing the triangle's color accordingly. This method illustrates how the direction of a normal vector (represented by the arrow) influences the color of a surface in a 3D model. As the model rotates or tilts, the colors shift subtly, creating a dynamic and visually striking effect.
 
 
Rotating, spiraling sphere using reversed normal values for shading. There's a smooth gradient of colors because adjacent surfaces have similar normal vectors.
 
For example, in 3D space, a vector with only a Y value (0,1,0) indicates the surface is facing straight up, and one pointing along the positive X axis (1,0,0) means it's facing right. Normal vectors aren't always axis-aligned; each component can take values from -1 to 1, which allows a full range of possible directions. However, a normal vector is always normalized, meaning its length is exactly 1. Normalization doesn't mean the X, Y, and Z values sum to 1, but rather that the square root of the sum of the squares of the components equals 1. For example, the normal vector (0.707, -0.707, 0) points down and to the right, and its length is 1 because 0.707^2 + (-0.707)^2 + 0^2 = 0.5 + 0.5 + 0 = 1. Similarly, the vector (1/3, 2/3, 2/3), i.e. roughly (0.333, 0.667, 0.667), also has a length of 1. The key point is that each component of a normal vector ranges from -1.0 to 1.0, while the vector as a whole always has unit length. If you interpret the X, Y, Z values as R, G, B color values instead, each surface direction gets a unique color: a normal vector facing (1,0,0) would produce pure red with no green or blue influence. This means adjacent faces, which typically sit at slightly different angles, will have slightly different normal vectors. As a result, you get a smooth gradient of colors across the surface, with the colors shifting subtly as the model's orientation changes.
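
In code, the idea boils down to something like this minimal sketch (using GLM; the handling of negative components is my assumption here, since components below zero have to be dealt with somehow before they can be used as colors):

#include <glm/glm.hpp>

// Compute a triangle's unit normal from its three vertices and
// reinterpret it as an RGB color.
glm::vec3 normalAsColor(const glm::vec3& p0, const glm::vec3& p1, const glm::vec3& p2) {
    // Cross product of two edges gives the face normal; normalize to unit length.
    glm::vec3 n = glm::normalize(glm::cross(p1 - p0, p2 - p0));  // components in [-1, 1]
    // Colors need components in [0, 1]; clamping negatives to zero is one
    // simple option (an assumption, not necessarily what my engine does).
    return glm::clamp(n, 0.0f, 1.0f);
}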
 
This method of using surface normals for coloration is what I applied to the letters and shapes in the logo. The notable difference between the 'PROJECT' and 'GLaMRE' lines is that for 'PROJECT,' I used the negative of each surface normal, creating a contrast with the 'GLaMRE' line, where I used the normal normals (pun intended). Additionally, the 'PROJECT' line uses smooth shading, where each vertex gets an averaged normal that is interpolated across the surface, letting the colors of adjacent faces blend into a "seamless" gradient. The 'GLaMRE' line, in contrast, uses flat shading, where every triangle keeps its own single face normal, giving each surface a uniform, solid color.
 

Hovering UFO model using "normal" normals and flat shading; the background is a rotating cube with a spiral galaxy texture.

Same UFO model, but with reversed normal values and smooth shading.

I discovered this captivating effect early in the project while resolving issues with loading and displaying basic 3D models. A smooth gradient across a model's surface is a valuable visual cue for pinpointing rendering problems. My first encounter with this phenomenon came while testing compatibility with models from external sources, using a magnolia model I found here. The model had a very low polygon count, so adjacent surfaces had noticeably different normal vectors, and there was no smooth color gradient across the surface. Surprisingly, this didn't detract from its visual appeal. Instead, it created a stunning mosaic of shifting colors, especially noticeable when the model was tilted and rotated. This is what I adore about graphics programming: the unexpected beauty that emerges from glitches and bugs. Moments of accidental discovery like this keep me going and curious, where a mistake can transform into a unique and personal artistic gem.

                
 
 
"We don't make mistakes, just happy little accidents."
    - Bob Ross
