Ok, I started this text in the small sketchbook I have, and it was supposed to end up here anyway, but I decided to finish it here instead… so… yeah. I’ll start by transcribing what I already have, and then finish the rest.
I can only start this post by admitting that I’ve come this far in my life without ever having needed to set up an OpenGL environment in a C application.
To be honest, that’s only because of some paranoia from the days when I refused to learn how to program, one of those “how scary it could be” myths. That paranoia persisted even after doing OpenGL tutorials in the D programming language, after drawing 2D geometry (vector stuff, basically) in Processing.org by pushing coordinates for colored polygons into a “controlled” OpenGL environment, after drawing basic 3D geometry on the PS2, and after drawing colored geometry yet again, this time on a Nintendo DS (which has a very OpenGL-ish implementation in devkitPro’s C/C++ toolchain for the various DS models).
But I had never actually had to initialize an OpenGL environment myself and use it as the game’s graphical interface (only second-hand, through libraries that used OpenGL). So I spent a couple of days chasing down and testing tutorials.
In fact, this step took me more than a couple of days, because at first I tried to incorporate the OpenGL code directly into the old SDL2 code I had. I had heard that you could run an OpenGL environment inside a window opened with SDL2. That didn’t go too well. I was already using many parts of the SDL2 library inside my own game engine code, and inserting OpenGL into it left me with too much code in each file, too diverse and “exotic”.
I wanted to avoid that for the sake of porting this code someday: I would like to keep my game engine code files intact and only change the machine interaction code files. Mobile platforms usually have OpenGL ports and C compilers of their own, consoles usually have some OpenGL-ish API of their own, and who knows, even the Nintendo DS has OpenGL-like stuff. OpenGL was as low-level as I knew I could go right now.
So I started with a new project.
## First Objective
To import the necessary libraries, initialize an OpenGL environment, and render and move a “Sprite” on the screen using a “Sprite Shader”, all in a single code file. Those were the tools necessary to render this kind of game.
However, shaders are part of this “new” OpenGL: you have to code how you want the machine to render things. All my past experiences with OpenGL environments were with the “old” OpenGL kind of interaction with the GPU, where OpenGL knows how to render and you only tell it what to render.
In the “old” way, you could very basically describe a cube by typing, line by line (point one, point two, point three…), the coordinates describing the geometry on 3 axes (X, Y, Z).
What you have to do in the “new” OpenGL (though you could actually also do this in the old one) is send OpenGL not one point per line, but, with one line of code, a good chunk of coordinates: numbers that describe a bigger piece of geometry.
E.g.: I want 10 sprites. That is 2 triangles per sprite, 3 points per triangle, 3 axes per point, and 1 number per axis, which is 2 × 3 × 3 × 1 = 18 numbers per sprite. 10 sprites = 10 × 18 = 180 numbers to describe the geometry necessary to display 10 sprites.
You can even send those 180 numbers to the OpenGL buffers once, at the start of your program, and just ask OpenGL to render them without ever touching the numbers again (like a tile map: after describing the tile grid geometry and which images go into each square, you only need to render them, never update them).
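As a rough illustration, this is what that one-time upload can look like in C. It’s a sketch, not my engine code: the names here are made up, and it assumes an extension loader like GLEW and an existing GL context.

```c
/* Sketch of the "load once, render forever" upload. */
#include <GL/glew.h>

#define NUM_SPRITES       10
#define FLOATS_PER_SPRITE 18  /* 2 triangles x 3 points x 3 axes */

static GLfloat sprite_verts[NUM_SPRITES * FLOATS_PER_SPRITE]; /* the 180 numbers */
static GLuint  vbo;

void upload_static_geometry(void)
{
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    /* GL_STATIC_DRAW hints that we will fill this once and only draw it,
     * exactly the tile-map case described above */
    glBufferData(GL_ARRAY_BUFFER, sizeof(sprite_verts),
                 sprite_verts, GL_STATIC_DRAW);
}
```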
The “thing” with “old” OpenGL is that you could start a basic environment and, by writing a few lines of code, have “stuff” showing up on the screen. With “new” OpenGL you have to set up the rendering pipeline yourself. You keep the buffers for the points of the geometry on your own, you load them once per program or once every frame, and you have to write two shader programs: one to process the geometry you send, and another to tell the “window” which colors go where. Those are called the Vertex Shader and the Fragment Shader, respectively.
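For reference, here is roughly what the smallest useful pair of those shader programs looks like, embedded as C strings. This is a minimal sketch under GLSL 1.20; the names `a_pos`, `a_tex` and `u_ortho` are mine, not anything standard.

```c
static const char *vertex_src =
    "#version 120\n"
    "attribute vec2 a_pos;\n"   /* point from the geometry buffer */
    "attribute vec2 a_tex;\n"   /* its texture coordinate         */
    "uniform mat4 u_ortho;\n"   /* the orthographic camera matrix */
    "varying vec2 v_tex;\n"
    "void main() {\n"
    "    v_tex = a_tex;\n"
    "    gl_Position = u_ortho * vec4(a_pos, 0.0, 1.0);\n"
    "}\n";

static const char *fragment_src =
    "#version 120\n"
    "uniform sampler2D u_texture;\n"
    "varying vec2 v_tex;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(u_texture, v_tex);\n" /* which color goes where */
    "}\n";

static GLuint compile(GLenum type, const char *src)
{
    GLuint s = glCreateShader(type);
    glShaderSource(s, 1, &src, NULL);
    glCompileShader(s);  /* real code should check GL_COMPILE_STATUS here */
    return s;
}
```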
After I had a square drawn on the window, I had to start thinking about how to describe a Sprite.
At the geometric level, a sprite is a rectangle composed of 2 triangles, turned to face an orthographic camera.
You get an orthographic camera matrix from the measurements of the screen (you can find code on the web, or do some trick of switching GL versions and letting “old” GL do the math). Then, in the Vertex Shader, you multiply that matrix by the points describing the triangles you sent to the buffer (imagine you are programming the math for each point individually).
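Here’s a sketch of doing that math by hand instead of asking “old” GL: a standard orthographic matrix mapping (0,0)..(w,h) screen pixels to GL’s -1..1 clip space, column-major as `glUniformMatrix4fv` expects. `ortho2d` is my name for it.

```c
void ortho2d(float m[16], float w, float h)
{
    for (int i = 0; i < 16; i++) m[i] = 0.0f;
    m[0]  =  2.0f / w;  /* scale X */
    m[5]  = -2.0f / h;  /* scale Y, negative so Y grows downward like screens do */
    m[10] = -1.0f;
    m[12] = -1.0f;      /* translate X so 0 lands on the left edge */
    m[13] =  1.0f;      /* translate Y so 0 lands on the top edge  */
    m[15] =  1.0f;
}
/* then: glUniformMatrix4fv(glGetUniformLocation(prog, "u_ortho"), 1, GL_FALSE, m); */
```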

This would be the order in which you send and process each of the points of a sprite’s geometry. If I used “old” OpenGL, I could order it to render GL_QUADS: rectangles with the convenience of only needing 4 points.
However, I haven’t seen any QUADS in this “new” OpenGL, so I’ll have to draw 2 triangles, and the order for their points is the following:

If the triangles’ vertices are ordered counter-clockwise, the image we choose to map onto this geometry (if it is pixel perfect) will show up as you see it in an editing program. If you order the vertices clockwise, you will see the same image flipped horizontally. Playing a bit more with the order we give to the geometry or to the texture mapping, we can achieve vertical flipping, and flipping in both directions at the same time.
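A sketch of those 6 points for one quad, ordered counter-clockwise as seen on screen (screen coordinates with Y down, and the texture assumed loaded top-down, as most image loaders do). Layout per point is x, y, u, v. Swapping the U values on every point flips the image horizontally; swapping the V values flips it vertically.

```c
const float quad[6 * 4] = {
    /* first triangle */
    0.0f, 0.0f,   0.0f, 0.0f,  /* top-left     */
    0.0f, 1.0f,   0.0f, 1.0f,  /* bottom-left  */
    1.0f, 1.0f,   1.0f, 1.0f,  /* bottom-right */
    /* second triangle */
    0.0f, 0.0f,   0.0f, 0.0f,  /* top-left     */
    1.0f, 1.0f,   1.0f, 1.0f,  /* bottom-right */
    1.0f, 0.0f,   1.0f, 0.0f,  /* top-right    */
};
```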
In the Fragment Shader you paint the geometry as you like. You can simply paint it red, or paint it with an image, or with part of an image. That image is called a Texture, and you map it from 0 to 1 on both the X and Y axes.
So we basically need 6 vertices of 5 numbers each to describe a Sprite with geometry:
- 1st Numb.: Screen X coordinate
- 2nd Numb.: Screen Y coordinate
- 3rd Numb.: Texture X coordinate
- 4th Numb.: Texture Y coordinate
- 5th Numb.: MAGIX… pay it no attention for now.
OpenGL is not supposed to understand vectors bigger than 4 numbers, but I can create various buffers with different attributes for each vertex. That is to say, one buffer for the screen coordinates (1st, 2nd) and another for the texture coordinates (3rd, 4th, 5th). Then I push those buffers into OpenGL and tell it to render them as triangles with a texture attached.
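A sketch of how those two buffers can be wired up as vertex attributes: one vec2 buffer for the screen coordinates and one vec3 buffer for the texture coordinates (the 5th, “MAGIX” number rides along there). The names `pos_vbo`, `tex_vbo`, `a_pos` and `a_tex` are made up for this post.

```c
void set_point_attributes(GLuint program, GLuint pos_vbo, GLuint tex_vbo)
{
    /* real code should check these for -1 before using them */
    GLuint a_pos = (GLuint)glGetAttribLocation(program, "a_pos");
    GLuint a_tex = (GLuint)glGetAttribLocation(program, "a_tex");

    glBindBuffer(GL_ARRAY_BUFFER, pos_vbo);
    glVertexAttribPointer(a_pos, 2, GL_FLOAT, GL_FALSE, 0, 0); /* 1st, 2nd */
    glEnableVertexAttribArray(a_pos);

    glBindBuffer(GL_ARRAY_BUFFER, tex_vbo);
    glVertexAttribPointer(a_tex, 3, GL_FLOAT, GL_FALSE, 0, 0); /* 3rd, 4th, 5th */
    glEnableVertexAttribArray(a_tex);
}
```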
BTW: between attaching the texture and pushing the points, you have to set a shader to render with. You could also set it once at the start of the game and never change it again, but you might want to change shaders mid-game or mid-frame.
## END of First Objective
## Second Objective
So I got a sprite rendering, and I also prepared the abstract structure for the Sprite Object (a box of code that holds numbers for you, and lets you do stuff with them). Next I had to build a Sprite Renderer that would automate the process of rendering a great number of Sprites. Structures describing Texture and Shader Objects were also useful, so I wrote them too. With that, I could advance to integrating this new way of rendering into my road’s code.
I also wrote some structures to handle opening the SDL2 window and setting up the OpenGL environment with Shaders properly.
Sorry for being long-winded, but I need to keep this explanation here.
The Sprite Renderer is the concept you apply to decide how the machine receives the information that describes your geometry. Depending on how fast you send things, and on where you place the OpenGL calls that are slow, you can strongly affect the number of sprites you can have on screen, even on slower machines.
The process is [Bind Texture] > [Tell OpenGL to use a shader] > [Push the Vectors] > [Order to Draw the Triangles] (there’s a code sketch right after this list).
- [Bind Texture] - Relatively slow
- [Tell OpenGL to use a shader] - Mildly slow
- [Push the Vectors] - Depends on how many sprites you have to render, but fast for what it does
- [Order to Draw the Triangles] - Probably the slowest
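Here’s the sketch promised above: one frame of that process, with every sprite that shares this texture and shader bundled into a single push and a single draw call. All the function and parameter names are mine.

```c
void render_frame(GLuint texture, GLuint shader, GLuint vbo,
                  const GLfloat *verts, GLsizei num_floats,
                  GLsizei num_points)
{
    glBindTexture(GL_TEXTURE_2D, texture);      /* relatively slow       */
    glUseProgram(shader);                       /* mildly slow           */

    glBindBuffer(GL_ARRAY_BUFFER, vbo);         /* push the vectors:     */
    glBufferData(GL_ARRAY_BUFFER,               /* fast for what it does */
                 num_floats * sizeof(GLfloat), verts, GL_STREAM_DRAW);

    glDrawArrays(GL_TRIANGLES, 0, num_points);  /* probably the slowest  */
}
```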
I plan to keep 3 textures attached to the GPU to serve as Sprite Atlases, which I will edit and upload again at any given time.
Textures are attached at specific moments (like on arriving at the next level, loading the part of the atlas that holds that level’s sprites), and only once after being edited.
I plan to use various Shaders, but to render as much geometry as I can with each one of them, so this won’t take many calls.
I kinda “have to” upload the geometry of each sprite every frame, but this is faster than you’d think if you do it in big bundles.
I will try to give the order to draw the triangles only after I have pushed all the vectors into every Shader’s geometry buffer. If this doesn’t work well, and the order to draw only uses the last assigned shader, I’ll have to make a Draw call for each shader. In that case I will definitely want to push as much geometry as I can per shader, because this call is much slower than pushing vectors.
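If it comes to that, the fallback would look something like this sketch: one bundle and one draw call per shader. The `batch` structure is hypothetical, just to show the shape of it.

```c
struct batch {
    GLuint   shader, texture, vbo;
    GLfloat *verts;       /* every sprite for this shader, pre-bundled */
    GLsizei  num_points;  /* 6 points per sprite, 5 floats per point   */
};

void render_batches(struct batch *batches, int count)
{
    for (int i = 0; i < count; i++) {
        struct batch *b = &batches[i];
        glBindTexture(GL_TEXTURE_2D, b->texture);
        glUseProgram(b->shader);
        glBindBuffer(GL_ARRAY_BUFFER, b->vbo);
        glBufferData(GL_ARRAY_BUFFER,
                     b->num_points * 5 * sizeof(GLfloat),
                     b->verts, GL_STREAM_DRAW);
        glDrawArrays(GL_TRIANGLES, 0, b->num_points);
    }
}
```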
End of explanation
After creating this new graphics interaction layer, similar to and inspired by SDL rendering and by GameEngine2D from the PSM DevKit (used to make the Alpha version of Oh, Deer! on the PlayStation Vita), I incorporated the “road” code I had written in the previous project.
It was easy to get each of the Road’s Segments (each differently colored “slice” of road) rendering as a sprite with at least the correct size and scale.
Funny results. The geometry has the shape of a rectangle (lots of width, almost no height) for each segment. That’s why the lines of the road are pointing UP: they need to be trapezoids. However, if I set the geometry to the right shape, with a shorter top than bottom, I get this:
I had this same problem the first time, on the Vita. After changing the rendering of the road to polygon segments, the textures came out like this. Very long story short: when you shape a rectangle into a trapezoid, the texture will skew.
With GLSL, you need to fix that in the shader. I had discovered before that using a 4th coordinate in the texture position description tells OpenGL to fix this. But I’ll be using the same method I used in the first version.
A Q coordinate is calculated per texture coordinate (that 5th number, MAGIX). This coordinate is a relation of each point to the center. I do all the “relation between each vertex” math in my game code, because the Vertex Shader works on each vertex alone, and there I only apply some minor math. Then, in the Fragment Shader, I only have to do a division that, I imagine, finalizes the perspective correction of the trapezoid.
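For what it’s worth, here is a sketch of that trick as I understand it, with illustrative shaders rather than my actual ones: the game code sends (u × q, v × q, q) as the texture attribute, the Vertex Shader passes it through untouched, and the Fragment Shader’s division finishes the perspective correction after interpolation.

```c
static const char *road_vertex_src =
    "#version 120\n"
    "attribute vec2 a_pos;\n"
    "attribute vec3 a_tex;\n"   /* (u * q, v * q, q), computed in game code */
    "uniform mat4 u_ortho;\n"
    "varying vec3 v_tex;\n"
    "void main() {\n"
    "    v_tex = a_tex;\n"
    "    gl_Position = u_ortho * vec4(a_pos, 0.0, 1.0);\n"
    "}\n";

static const char *road_fragment_src =
    "#version 120\n"
    "uniform sampler2D u_texture;\n"
    "varying vec3 v_tex;\n"
    "void main() {\n"
    "    /* the division that finalizes the perspective correction */\n"
    "    gl_FragColor = texture2D(u_texture, v_tex.xy / v_tex.z);\n"
    "}\n";
```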
Meaning I get this:
I also took a piece of code that helps me “extend” the geometry and the texture coordinates of each Segment to match the window’s borders. It also came from my previous Vita version’s code. This completes the field where the road sits, but it stretches the last pixel of the texture all the way to the border of the screen.
It doesn’t look very pretty, but it will look slightly better when I do alternating colors for the segments that are very far away. The lack of different-height lines on the horizon does help a bit.
I could probably, with relative ease, put on the road the sprites that “create the design” of each level.
On the Vita version I called them “Billboards”, because they always face the viewer. Each had a sprite and a position in the “world”. They will eventually have a collision box, and in the Alpha version of the game they could even be moved on 3 axes (horizontal, vertical, and depth). Unfortunately, we only used those features for the blood from exploding deer and for the waves of the starting level.
Deer were able to move, and even to have movement cycles. But I guess I ended up making the scripting language too dense for anyone to be able to design those into the levels.
In the meantime I got this:

Big Version click here.
## END of Second Objective
## What’s Next
So now I’ll think about creating one last layer of machine communication. Frame Buffers are basically Textures that you render your screen into, but that are not your screen. You render the scene to one of them, keep it, and then render it onto the actual screen on its own geometry.
When you render this Texture of the screen onto the actual screen, you use a shader, and that means I will be able to do the full-screen visual effects I did on the Vita version using fragment shaders.
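A sketch of what setting up such a Frame Buffer can look like, assuming framebuffer objects are available (core since GL 3.0, or via the EXT_framebuffer_object extension); `fbo` and `screen_tex` are my names.

```c
GLuint fbo, screen_tex;

void create_screen_buffer(int w, int h)
{
    /* the texture that will hold the rendered screen */
    glGenTextures(1, &screen_tex);
    glBindTexture(GL_TEXTURE_2D, screen_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    /* the frame buffer that renders into it instead of the screen */
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, screen_tex, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0); /* back to the real screen */
}
/* Per frame: bind fbo, draw the game, bind 0 again, then draw
 * screen_tex on a full-screen quad with the effect shader. */
```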
Afterwards, I’ll build the structure around importing and loading sprites and road designs into the “game”.
That will leave me with a build that will most likely hold the levels from the Vita version. I will also have to find a way to load atlases while in-game. On the Vita version I did that by copying the new atlas into memory line by line, using the leftover milliseconds I had after rendering each frame.
Again I ended up writing a very long post, and not many posts over all these days… Oh well.
Sorry about that.