Oh, Deer! Thread [CHANGED]

This will be the thread where I’ll be posting progress on the PC port of Necrosoft’s game Oh, Deer!
The first ALPHA version, for the Playstation Mobile platform, was also programmed by me.
This port is also a remake of the engine: from the very slow and very unstable C# running on half the resources of a PSVita, to pure C code with accelerated graphics on your own PC.

For this project I will be using the SDL2 libraries with OpenGL. My hope is that this will not only get the maximum possible speed out of the engine, but also allow easy ports to all PC operating systems, and hopefully easy ports to other platforms in the future.

How can I put it? This is how I would have liked this game to have been programmed in the first place.

The engine itself is for a Pseudo3D racer, pretty much like Outrun. However, I must admit that I’m a bit lazy, and instead of reading a pile of articles on the Pseudo3D subject and building the engine totally old-school, I did the shameless thing and copied code from someone else, namely Jake Gordon and his tutorial on how to build a Pseudo3D racer in JavaScript.

For learning this kind of stuff most people point themselves to Lou’s Pseudo3D page, but personally I think that article is for two audiences: people who are well versed in software engineering with a very deep knowledge of old arcade hardware, and people who want to feel really smart after reading all that stuff even if they don’t actually understand any of it, and don’t even realize they don’t understand.

Still, it was thanks to Jake’s code and explanations, plus some in-depth explanations from Lou’s page, that I was able to create the first version of the game on a platform that wasn’t even supposed to support more than 250 sprites on screen. We ended up with over 1000 sprites on screen, using geometry (trapezoids) to create the road.

Anyway, let’s get on with this, and I’ll try to have as many “good story, bro” moments as I can as I go along.

I’ve edited this first post for the sake of having a proper introduction for the thread. The previous first post is reproduced from this point onward without any modifications.
[OLD POST]

Do any of you guys know a way to program GUI PC applications in the C language (preferably multi-platform) that is convenient and comfortable, a bit like Visual Studio Express? Either free or ridiculously cheap.

Thanks for the move.
First, why is this thread here?
I kinda feel this thread should be in the development category, but since it has only an indirect connection to game development, I thought I’d test the forum’s general flexibility. I am perfectly fine with getting the thread moved.
As a lame excuse, I see programming as writing. In fact, in Portuguese culture/law, until recently at least, all software was considered written work. Thereby students can treat otherwise-illegal copies as legal: a video game is software, therefore a written work, so as a student I am permitted to have a free copy for my studies (yay, one year of socialist government).

Anyway, I need to make one, possibly two video game editor tools. Even if it ends up being two of them, at least at the start the code will basically be the same.
I’ve been working with C# for at least a couple of years. I’ve done a small number of minor content-editing tools, and a decent bit of Unity editor extending/enhancing. Still, I remember my times programming C as times of more “sense”.

Right now I’ve spent two days rechecking my GUI skills with Windows Forms. It’s been OK most of the time… the libraries are well documented and there’s plenty of stuff on the net to go around.
Still… I would really like an opportunity to go back to C code.

The last thing I did was install openFrameworks and give its GUI library a look. To me it seems a bit too flashy, simplistic, and expensive. I thought it would be a good idea to install the OF libraries, or at least have them around, because for the games I’ll probably end up using OpenGL. The thing is, openFrameworks lays those resources out ready for the picking, with a wonderful OpenGL window on the three big operating systems. However, so does SDL.

The primary plan would be making a PC port of a Pseudo3D racer, Outrun kind of graphics, with a medium-sized feature set (a subject I’ll probably discuss in the dev category). That means having part of the game engine embedded within the editor, or being really good at making command-prompt calls.

So yeah, hence the question.
Google has already been used, and what I ended up with was either not very comfortable or needed too much preparation before actually starting to do anything.
I would rather start that step with a bit of asking around.


Qt5 is decent and cross platform and has bindings to everything if you’re just looking to make a GUI that does one thing. Dropbox uses it so they can have a single code base on Windows and Linux (I think they put more effort into the cocoa OSX app).

If it’s a game then unity or game maker and C# are probably your best bet, yeah


Gtk isn’t super well maintained on Windows and OSX these days afaik

Like, I much prefer Gtk3 applications and desktops on Linux, but the “cross platform” bit isn’t really holding up (those sad old gimp builds are still Gtk2, installing pygtk on another platform is a nightmare, etc)

Thank you both, those were sort of the options I’d heard about. I gave GTK a try, but something went wrong with the dependencies. Qt I still haven’t given a real try.

@Felix the game will most definitely be C or C++, and as simple as possible. I don’t want to make future ports of the core stuff too hard. The basic principle of the game is simple either way, and I think sticking to the basics seems like the right choice.

@heavgear, I am definitely looking for an entry point. I’ve heard the same about practical time… hence my situation. It still doesn’t feel natural, and I’m kinda used to working my own memory allocations. On-the-fly pixel changes work wonderfully in plain code, and I remember SDL having particularly easy ways to do those efficiently. Achieving that in almost any C#-based language is definitely heavy. I’m purposely leaving shaders out of the picture. Also, achieving 60fps at least in the code part of the game, even if not on the graphics side… so damn hard.

Still, I am making an application, so I’m just checking options. If it doesn’t work, then I’ll go back to what I know.

lol… yeah, I am perfectly aware of that, I used to write full C code files on paper sheets before punching the keyboard. That’s not exactly what was going on here. This is a simple thread I decided to create to think a bit about something waiting back at home, while I’m spending the weekend in the middle of the Polish countryside, at a birthday party at the country house of a friend of the person I live with.
Think of it as casually thinking about work.

I am very well aware of the huge advantages of the standard C# libraries compared to C… yeah, my first choice was exactly that: C# editor, C game engine. I really don’t need more than a quick window with the game opening in it. But so many times, even or especially in Unity, you are so far from the motion/flow, maybe because you are away from the machine. Whatever new-age crap you can think of.

I was honestly interested to see if there are any other options. If Google and SB can’t point me to anything more, then it doesn’t exist. I’ll still try Qt just because I can. If I can get a good pixely character out of it in the same two days it took with C#, then I’ll switch.

To be honest, I only need Windows versions of the editors =p.

Mods move to king of development pls

QT if you want cross-platform UIs, Unity if you don’t give a shit and want to produce something that other people can see sometime before you inevitably succumb to irrelevance and death memento mori


Yeah heavy, don’t worry too much. I’ve been there along the way. This was simply to check on others’ opinions and experiences.
I have been using Unity, but I don’t really like the experience. It feels like driving a Ferrari in a Toyota race most of the time…

OK, I have decided.
GTK# for making the editor multi-platform. Let’s hope it works; I won’t put much pressure on that from the start, and will just focus on the Windows version.
C and SDL to make the game engine. Stick with the basics.

.
I’m going with C# and GTK because @heavygear is very much right about C# when it comes to creating applications. The utility classes and core libraries cut out a lot of work.
Also, applications are constantly creating and deleting windows, managing projects, saving files, what not… basically controlling and managing data reads and writes through a GUI.
To be perfectly honest, I could even just use Unity to create the game data, fully customizing its editor, and still use a C engine of my own design, even to display inside their editor. But programming the editor in Unity isn’t much different, nor much more productive, than programming the GUI yourself with tools as easy as WinForms and GTK.
There is one last detail that is probably the most annoying thing about Unity, and it happens both in their game engine design and their editor design. Sometimes, more often than you’d like, you want to do something stupidly basic, and the rules around Unity are so complex that you have to build a workaround that takes hours to study and implement, and that is infuriating.

.

For the game itself I’ll use C and SDL for a simple reason: I really don’t need much more external stuff than that. SDL gives me all the media access, and everything else I’d have to do anyway, in Unity or anything else.
The game engine is of my own design, and while in Unity I would have to find workarounds to get exactly what I want, SDL provides just the basic tools I need to do what I know I need to do.
I won’t have to do any complex data processing, and I’m a faithful believer that at least 70% of today’s games, and about 90% of games from the PS2 generation and before, can have all the memory they need allocated at the start of the application. I.e. you just define, in a sensible way, how much a “map” will need in memory, and then simply edit that memory space to get the desired result. Define a limit and don’t be afraid of it; it’s your own rule. Also, you can always change the limit later, and I advise you to, so you can maximize the use of memory on your system.
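To make that idea concrete, here is a minimal sketch of the “allocate once, edit forever” approach. The limits, type names, and function name are invented for illustration, not taken from the actual engine.

```c
#include <stdlib.h>

/* Invented upper bounds: decide how much one "map" can ever need. */
#define MAX_ROAD_SEGMENTS 4096
#define MAX_BILLBOARDS    1024

typedef struct {
    float segment_curve[MAX_ROAD_SEGMENTS]; /* per-segment curve values */
    float segment_hill[MAX_ROAD_SEGMENTS];  /* per-segment hill values  */
    int   segment_count;
    /* billboards, cars, etc. would get fixed-size pools the same way */
} MapData;

static MapData *map = NULL;

/* Allocate the whole map space once at startup; every level afterwards
 * just edits this memory instead of allocating its own. */
int game_memory_init(void)
{
    map = malloc(sizeof(MapData));
    return map != NULL;
}
```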

.

At the end of the day, I’ll stick to the things I do know how to do. I hope that is the most “time efficient” decision I can make right now.

I’ll probably change the title of the thread later, if I am allowed to.
Good thing “the man” moved this thread here, because I would like to continue it as a development thread for a game project I’ll be working on.
Thank you to all those who helped me with this first step.


As shameful as it is, I actually forgot to post updates in this topic.
I’ll try to be more careful with that from now on. This is a good place to keep track of my own work and not lose myself too much.

Anyway, a bunch of things have been done up to this point.
First there was the decision of which programming language and libraries to use. I could have done what most people do nowadays and gone for Unity or some other generic game engine with an approachable editor.
However… I’ve never liked those. I always feel too far from actually doing the workings of the game, and a few experiences with Unity did not change that at all.

I am already a careful programmer, and one who doesn’t believe that video games have much dynamic stuff going on (especially this older kind of game: you can basically set up the skeleton for all the data at the start, and then simply edit it to show a specific level, character, or enemy).
Also, I found that Unity has tremendous difficulty maintaining a stable framerate. Even using half a dozen garbage-collection tricks, there’s always a point where the framerate drops stupidly low… I was able to control this to a large extent in previous attempts, but there is simply too much stuff being created and destroyed that you just don’t know about, because that’s the exact purpose of engines like that: to hide things from you.

Moving along, the stage of actually getting things running and showing pictures on the screen, taking inputs, and all those shenanigans was very easy. I had used SDL a couple of times in the past, so all I did was keep the Lazy Foo SDL2 tutorials as a reference for whatever I wanted to do.
SDL2 held some very interesting surprises, it now being much easier to be sure you are using hardware-accelerated rendering instead of software rendering. But that choice is still there in case it’s ever needed in the future.
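For reference, this is roughly what that choice looks like in SDL2: you ask for an accelerated renderer explicitly when creating it. The window title and size here are placeholders, not necessarily what the port uses.

```c
#include <SDL2/SDL.h>

static SDL_Window   *window   = NULL;
static SDL_Renderer *renderer = NULL;

int init_video(void)
{
    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return -1;

    window = SDL_CreateWindow("Oh, Deer!",
                              SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                              960, 544, SDL_WINDOW_SHOWN);
    if (!window)
        return -1;

    /* Ask for hardware acceleration; SDL_RENDERER_SOFTWARE remains an
     * option if it is ever needed. */
    renderer = SDL_CreateRenderer(window, -1,
                                  SDL_RENDERER_ACCELERATED |
                                  SDL_RENDERER_PRESENTVSYNC);
    return renderer ? 0 : -1;
}
```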

I then started doing the same thing I did in the PSM version of the game: building a road using Jake Gordon’s code as a base. As this is a known step for me, it was quite easy and very fast to get a simple straight road of segments.
For now I am doing this road with per-line rendering. As I mentioned in the first post, the PSM version used geometry to render the road. This means that to create the “perspective” of the road I used trapezoid polygonal shapes, with a “straight” texture as the road, pretty much like this one:

For the per-line rendering of the road you use a texture of the road drawn in perspective, and depending on how far away that part of the road is, you choose a “more distant” line from the source texture to paste onto the screen. The source texture I am using right now looks like this:

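To make the per-line idea concrete, here is a rough sketch assuming the SDL2 renderer path. The names and the simple row-selection formula are placeholders of mine; the real code picks rows per segment, with curves and hills factored in.

```c
#include <SDL2/SDL.h>

/* Copy one row of the perspective road texture per screen row below the
 * horizon; rows nearer the player come from the bottom of the texture. */
void draw_road_lines(SDL_Renderer *renderer, SDL_Texture *road_tex,
                     int tex_w, int tex_h,
                     int screen_w, int screen_h, int horizon_y)
{
    for (int y = horizon_y; y < screen_h; y++) {
        /* how far down the visible road this screen row is, 0..1 */
        float t = (float)(y - horizon_y) / (float)(screen_h - horizon_y);

        SDL_Rect src = { 0, (int)(t * (tex_h - 1)), tex_w, 1 };
        SDL_Rect dst = { 0, y, screen_w, 1 };
        SDL_RenderCopy(renderer, road_tex, &src, &dst);
    }
}
```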
After getting the straight roads, I moved along to the curves and the hills. All good and fine. It took some time because this C code is much simpler, so simple that I’m no longer used to thinking in such simplistic terms, but it sure feels nice… clean.
This produced some interesting errors, as always:



After achieving the “expected” result from a normal road, I proceeded to do one of the enhancements from my previous game engine. I think this is a technique used in the game Outrunners: making the camera take on the inclination of the road.
Basically, when you have a ramp going up, the camera points in that direction, and vice versa when the ramp goes down.

To be fair, I achieved this in the PSM version without much idea of what I was doing. So this time I simply copied the code, and all I got was a major clusterfuck. It took me a few days to sort out, but after an hour of conversation with a math-teacher cousin of mine, I ended up saying something from my experience with video games: “The camera never moves in video games, the world does.” This is kind of a dogma. Even if you do have a “camera object” that you move around beautifully, like in Unity, the fact is the camera is just a store of values for the transformations that bring the world to the screen.

That’s when it hit me that my previous code was probably moving the actual camera, because the matrix rotation and translation I did came after the step of “bringing the world to the camera”. I rethought my objective here, which was for the camera and road angles to coincide, but centred on the player, so the car always stays in the same position relative to the screen.

To give a small visual explanation, this is what a game like Outrun does:

This is what I “probably” achieved in the PSM version of the game, and what I was trying to achieve for three days:

And finally this is what I should have done in the first place:

What was necessary to achieve this was first to translate each “point” on the road relative to the player (so the player’s position becomes its 0,0 point), apply a very simple rotation matrix transformation, translate the point back again by the player’s position, and after that I could do the projection maths normally involved in this type of game.
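In code terms, that step looks something like this. It’s only a minimal sketch with placeholder names; the actual engine variables and the projection that follows are not shown.

```c
#include <math.h>

/* Rotate a road point around the camera's X axis (i.e. in the Y/Z plane)
 * about the player's position, so the player stays fixed on screen. */
void rotate_about_player(float *point_y, float *point_z,
                         float player_y, float player_z, float pitch)
{
    /* 1. translate so the player sits at the origin */
    float y = *point_y - player_y;
    float z = *point_z - player_z;

    /* 2. plain 2D rotation in the Y/Z plane */
    float ry = y * cosf(pitch) - z * sinf(pitch);
    float rz = y * sinf(pitch) + z * cosf(pitch);

    /* 3. translate back; the usual pseudo-3D projection follows from here */
    *point_y = ry + player_y;
    *point_z = rz + player_z;
}
```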

The difference in results is this, without “camera rotation”:

And with “camera rotation”:

The con of this technique is that sometimes it looks like a ramp is going up, when actually that supposed ramp is level ground and you’re just coming down a hill.
In the PSM version I “limited” this rotation to only a few degrees of inclination, which gets the best of both having camera rotation and not having it. You always see enough of the road to tell where you’re going, instead of having the road disappear on a downhill, and you still know whether you’re going downhill or uphill because the camera never rotates too much.
I forgot to mention: this rotation is done around the X axis of the camera, meaning I use the Z and Y axes as a plane, pretty much like a screen but where the Z coordinate plays the role of the X coordinate.

OK, this should be enough for now. Quite a long post. I’ll try to post more frequently so I don’t have to spend so much time on just one of these posts, and that way I’ll keep this thread more up to date.


OK, I started this text in the small sketchbook I have, and it was supposed to end up here anyway, but I decided to finish it here directly… so… yeah. I’ll start by transcribing what I have, and then finish whatever is left.

I can only start this post by admitting I’ve made it this far in my life without ever having needed to set up an OpenGL environment in a C application.
To be realistic, that’s only because of some paranoia left over from the times when I refused to learn how to program, one of those “how scary it could be” myths or something. It’s a paranoia that persisted even after doing OpenGL tutorials in the D programming language, drawing 2D geometry (vector stuff, basically) in Processing.org by pushing coordinates for vectors describing colored polygons in a “controlled” OpenGL environment, drawing basic 3D geometry on the PS2, and yet again colored geometry but this time on a Nintendo DS (which has a very OpenGL-ish implementation in the devkitPro C/C++ toolchain for the various DSes).

But I’ve never actually had to initialize and use an OpenGL environment as the graphical interface of a game myself (only through second-hand libraries that used OpenGL). So I’ve spent a couple of days chasing down and testing tutorials.

In fact, this step took me more than a couple of days, because at first I tried to directly incorporate the OpenGL code into the old SDL2 code I had. I’d heard that you can run an OpenGL environment while initializing the application window with SDL2. It didn’t go too well. I was already using many parts of the SDL2 library inside my own game engine code, and inserting OpenGL into it left me with too much code in each file, too diverse and “exotic”.

I wanted to avoid that for the sake of having to port this code someday: I would like to keep my game engine code files intact and only change the machine-interaction code files. Mobile platforms usually have OpenGL ports and C compilers of their own, consoles usually have some OpenGL-ish API of their own, and who knows, even the Nintendo DS has OpenGL-like stuff. OpenGL was as low as I knew I could go right now.
So I started a new project.

##First Objective


To import the necessary libraries, initialize an OpenGL environment, and render and move a “Sprite” on the screen using a “Sprite Shader”, all in a single code file. Those were the necessary tools to render this kind of game.

However, shaders are part of this “new” OpenGL stuff: you have to code how you want the machine to render things… something like that. All my past experiences with OpenGL environments were with the “old” OpenGL style of interacting with a GPU: OpenGL knows how to render, you just have to tell it what to render.

The “old” way, you could very basically describe a cube by typing, line by line (point one, point two, point three…), the coordinates that describe the geometry on three axes (X, Y, Z).
What you have to do in the “new” OpenGL (though you could actually do this in the old one too) is send OpenGL not one point per line of code, but a good chunk of coordinates, numbers, that describe a bigger piece of geometry in a single call.

E.g.: I want 10 sprites. That is 2 triangles per sprite, 3 coordinates per triangle, 3 axes per coordinate, 1 number per axis, which is 2 x 3 x 3 x 1 = 18 numbers = 1 sprite. 10 sprites = 10 x 18 = 180 numbers to describe the geometry necessary to display 10 sprites.

You can even send those 180 numbers to the OpenGL buffers at the start of your program, and just ask for them to be rendered without ever touching the numbers again (like a tile map: after describing the tile grid geometry and which images go into each square, you only need to render them, without updating them).

The “thing” with “old” OpenGL is that you could start a basic environment, and by writing a few lines of code you would have “stuff” showing up on the screen. With “new” OpenGL you have to set up the rendering pipeline yourself. You have to manage your own buffers for the geometry’s points, loading them once per program or once every frame, and you have to write two shader programs: one to process the geometry you send, and another to tell the “window” screen which colors go where. Those are called the Vertex Shader and Fragment Shader, respectively.
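As an illustration, this is roughly what a minimal pair of sprite shaders can look like, written here as C string constants the way they would be handed to glShaderSource(). The attribute and uniform names are my own placeholders, not the engine’s.

```c
/* Vertex Shader: transforms each vertex with the orthographic matrix and
 * passes the texture coordinate along. */
static const char *sprite_vertex_src =
    "#version 120\n"
    "attribute vec2 a_position;\n"   /* screen-space vertex position   */
    "attribute vec2 a_texcoord;\n"   /* texture coordinate, 0..1       */
    "uniform   mat4 u_ortho;\n"      /* orthographic projection matrix */
    "varying   vec2 v_texcoord;\n"
    "void main() {\n"
    "    v_texcoord  = a_texcoord;\n"
    "    gl_Position = u_ortho * vec4(a_position, 0.0, 1.0);\n"
    "}\n";

/* Fragment Shader: decides which colors go where by sampling the atlas. */
static const char *sprite_fragment_src =
    "#version 120\n"
    "uniform sampler2D u_atlas;\n"
    "varying vec2      v_texcoord;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(u_atlas, v_texcoord);\n"
    "}\n";
```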

After I had a square drawn on the window, I had to start thinking about how to describe a Sprite.
At a geometric level, a sprite is a rectangle composed of 2 triangles, facing an orthographic camera.
You get an orthographic camera matrix built from the screen’s measurements (you can grab code from the web, or do some trick of switching GL versions and letting “old” GL do the maths). Then in the Vertex Shader you multiply that matrix by each of the points describing the triangles you sent to the buffer (imagine you are programming the math for each point individually).

This would be the order in which you send and process each of the points of a sprite’s geometry. If I were using “old” OpenGL I could order it to render GL_QUADS: rectangles, with the convenience of only needing 4 coordinates.
However, I haven’t seen any QUADS in this “new” OpenGL, so I’ll have to draw 2 triangles, and the order for the points is the following:

If the triangles’ vectors are ordered counter-clockwise, the image we choose to map onto this geometry (if it is pixel perfect) will show up just as you see it in an editing program. If you order the vectors clockwise, you see the same image flipped horizontally. Playing a bit more with the order we give to the geometry or the texture mapping, we can achieve vertical flipping, and flipping in both directions at the same time.
In the Fragment Shader you paint the geometry as you like. You can simply paint it red, or paint that geometry with an image, or part of an image. That image is called a Texture, and you map it from 0 to 1 on both the X and Y axes.

So we basically need 6 vectors of 5 numbers each to describe a Sprite’s geometry:

  • 1st Numb.: Screen X coordinate
  • 2nd Numb.: Screen Y coordinate
  • 3rd Numb.: Texture X coordinate
  • 4th Numb.: Texture Y coordinate
  • 5th Numb.: MAGIX… pay it no attention for now.

OpenGL is not supposed to understand vectors bigger than 4 numbers, but I can create several buffers, with a different attribute for each part of the vector. That is, a buffer for the screen coordinates (1st, 2nd) and another for the texture coordinates (3rd, 4th, 5th). Then I push those buffers into OpenGL and tell it to render them as triangles with a texture attached.
BTW: between attaching the texture and pushing the points, you have to set a shader for the rendering… You could also set it once at the start of the game and never change it again, but you might want to change shaders mid-game or mid-frame.
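A sketch of that two-buffer arrangement follows, assuming a GL context and an extension loader (GLEW here) are already set up. The buffer names, attribute locations, and the helper’s name are all illustrative.

```c
#include <GL/glew.h>

/* positions: sprite_count * 6 vertices, 2 floats each (screen x, y)
 * texdata:   sprite_count * 6 vertices, 3 floats each (tex x, tex y, MAGIX) */
void push_sprite_batch(GLuint vbo_pos, GLuint vbo_tex,
                       const GLfloat *positions, const GLfloat *texdata,
                       int sprite_count)
{
    /* buffer 0: screen coordinates (1st and 2nd numbers) */
    glBindBuffer(GL_ARRAY_BUFFER, vbo_pos);
    glBufferData(GL_ARRAY_BUFFER, sprite_count * 6 * 2 * sizeof(GLfloat),
                 positions, GL_DYNAMIC_DRAW);
    glEnableVertexAttribArray(0);
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);

    /* buffer 1: texture coordinates plus the extra number (3rd, 4th, 5th) */
    glBindBuffer(GL_ARRAY_BUFFER, vbo_tex);
    glBufferData(GL_ARRAY_BUFFER, sprite_count * 6 * 3 * sizeof(GLfloat),
                 texdata, GL_DYNAMIC_DRAW);
    glEnableVertexAttribArray(1);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, (void *)0);

    /* render everything as textured triangles, 6 vertices per sprite */
    glDrawArrays(GL_TRIANGLES, 0, sprite_count * 6);
}
```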

##END of First Objective

##Second Objective

So I got a sprite rendering, and also prepared the abstract structure for the Sprite object (a box of code that holds numbers for you, and that you can do stuff with). Next I had to build a Sprite Renderer that would automate the process of rendering a great number of sprites. Structures describing Texture and Shader objects were also useful, so I made those too. With that I could move on to integrating this new way of rendering into my road code.
I also made some structures to handle opening the SDL2 window and setting up the OpenGL environment with shaders properly.

Sorry for being long-winded, but I need to keep this explanation here.
The Sprite Renderer is the concept you apply to how the machine will receive the information that describes your geometry. Depending on how fast you send things, and where you place the OpenGL calls that are slow, you can strongly influence the number of sprites you can have on screen, even on slower machines.
The process is [Bind Texture] > [Tell OpenGL to use a shader] > [Push the Vectors] > [Order to Draw the Triangles], sketched in GL calls just after the list below.

  • [Bind Texture] - Relatively slow
  • [Tell OpenGL to use a shader] - Mildly slow
  • [Push the Vectors] - Depends on how many sprites you have to render, but it’s fast at what it does.
  • [Order to Draw the Triangles] - Probably the slowest.
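Here are the four steps as plain GL calls, roughly in the order a renderer would issue them each frame. It’s only a sketch: the parameter names are placeholders, and in practice the vector push would be one upload per attribute buffer.

```c
#include <GL/glew.h>

void draw_sprite_batch(GLuint atlas_texture, GLuint sprite_shader,
                       GLuint sprite_vbo, const GLfloat *vertex_data,
                       GLsizeiptr data_size, int vertex_count)
{
    glBindTexture(GL_TEXTURE_2D, atlas_texture); /* [Bind Texture]   - slow   */
    glUseProgram(sprite_shader);                 /* [Use a shader]   - mild   */
    glBindBuffer(GL_ARRAY_BUFFER, sprite_vbo);   /* [Push the Vectors] - fast */
    glBufferSubData(GL_ARRAY_BUFFER, 0, data_size, vertex_data);
    glDrawArrays(GL_TRIANGLES, 0, vertex_count); /* [Draw] - the slowest call */
}
```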

I plan to attach 3 textures to the GPU to serve as sprite atlases, which I can edit and upload again at any given time.
Textures are attached only at specific moments (like arriving at the next level and loading the part of the atlas holding that level’s sprites), and only once after being edited.
I plan to use various shaders, but render as much geometry as I can from each one of them, so this won’t involve many calls.
I kinda “have to” upload the geometry of each sprite every frame, but this is faster than you’d think if you do it in big bundles.
The order to draw the triangles I will try to issue after I have pushed all the vectors from every shader’s geometry buffer. If this doesn’t work well, and the draw order only uses the last assigned shader, I’ll have to make a draw call for each shader, so I will definitely want to push as much geometry as I can per shader, because this call is much slower than pushing vectors.
End of explanation.
After creating this new graphics interaction layer, similar to and inspired by SDL rendering and by GameEngine2D from the PSM DevKit used to make the Alpha version of Oh, Deer! on the Playstation Vita, I incorporated the “road” code I had done in the previous project.
It was easy to get each road Segment (each differently colored “segment” of road) rendering as a sprite with at least the correct size and scale.

Funny results. The geometry has the shape of a rectangle (lots of width, almost no height) for each segment. That’s why the lines of the road are pointing UP; they need to be trapezoids. However, if I set the geometry with the right shape, a shorter top than bottom, I get this:

I had this same problem the first time on the Vita. After changing the rendering of the road to polygon segments, the textures came out like this. Very long story short: when you shape a rectangle into a trapezoid, the texture will skew.
With GLSL, you need to fix that in the shader. I’ve found before that using a 4th coordinate in the texture position description can tell OpenGL to fix this, but I’ll be using the same method I used in the first version.

A Q coordinate is calculated per texture coordinate (that 5th number, MAGIX). This coordinate is a relation of each point to the centre. I do all the “relation between the vectors” math in my game code, because the Vertex Shader works on each vector alone, and there I only apply some minor maths to them. Then in the Fragment Shader I only have to do a division, which I understand finalizes the correction of the trapezoid’s perspective.
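To show the shape of that fix, here is a hedged sketch of the two shaders with the division in place, again as C string constants with placeholder names. The per-corner Q values (and the “minor maths” in the vertex stage) are computed in the game code and are not shown here.

```c
/* Vertex Shader: passes the pre-multiplied texture coordinate through.
 * a_texcoord is expected as (u*q, v*q, q), prepared on the CPU side. */
static const char *road_vertex_src =
    "#version 120\n"
    "attribute vec2 a_position;\n"
    "attribute vec3 a_texcoord;\n"
    "uniform   mat4 u_ortho;\n"
    "varying   vec3 v_texcoord;\n"
    "void main() {\n"
    "    v_texcoord  = a_texcoord;\n"
    "    gl_Position = u_ortho * vec4(a_position, 0.0, 1.0);\n"
    "}\n";

/* Fragment Shader: the division by q is what removes the trapezoid skew. */
static const char *road_fragment_src =
    "#version 120\n"
    "uniform sampler2D u_road;\n"
    "varying vec3      v_texcoord;\n"
    "void main() {\n"
    "    vec2 uv      = v_texcoord.xy / v_texcoord.z;\n"
    "    gl_FragColor = texture2D(u_road, uv);\n"
    "}\n";
```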

Meaning I get this:

I also took a piece of code that helps me “extend” the geometry and the texture coordinates of each Segment to match the window’s borders. This also came from my previous Vita version’s code. It completes the field the road sits in, but it stretches the last pixel of the texture all the way to the border of the screen.
It doesn’t look very pretty, but it will look slightly better when I add alternating colors for the segments that are very far away. Not having lines of different heights on the horizon helps a bit.

I could now, probably with relative ease, put in the sprites on the road that “create the design” of each level.
In the Vita version I called them “Billboards”, because they always face the viewer. Each has a sprite and a position in the “world”. They will eventually have a collision box, and in the Alpha version of the game they could even be moved along the three axes (horizontal, vertical, and depth). Unfortunately we only used those features for the blood from exploding deer and the waves of the starting level.
Deer were able to move, and even have movement cycles, but I guess I ended up making the scripting language too dense for anyone to be able to design those into the levels.

In the meantime I got this:

Big Version click here.

##END of Second Objective


##What’s Next

So now I’ll think about creating one last layer of machine communication. Frame Buffers are basically textures you render your screen into, that are not your screen. You render to them, keep the result, and then render it on the actual screen on its own geometry.
When you render this screen texture onto the actual screen, you use a shader, and that means I will be able to do the full-screen visual effects that I did in the Vita version using fragment shaders.

Afterwards, I’ll build the structure around importing and loading sprites and road designs into the “game”.
That will leave me with a build that will most likely hold the levels from the Vita version. I will also have to find a way to load atlases while in-game. I did that in the Vita version by copying the new atlas into memory line by line, using the leftover milliseconds I had after rendering each frame.

Again I ended up writing a very long post, and not many of them over several days… Oh well.
Sorry about that.


OK, just for the sake of keeping a record of the tutorials I’ve visited:

So I ended up getting framebuffer rendering working, directly integrated with my previous rendering code.
This will probably be the last piece of “machine graphical interaction” code I’ll create for a long time, probably until the port is done.
I’ll just clean up the code and make sure the entire process is autonomous and reproducible.

What I do here is render (draw) my game not to the window, but to a texture (an image). Then I can bind that texture to some geometry and draw it on the window like any normal sprite. The process of drawing that sprite on the screen uses a fragment shader, which allows me to program any full-screen effect I desire in the shader I create for rendering the screen texture.

This also allows me to render the HUD into another screen texture and just draw it over the game screen texture with a different shader. With this I can have the game screen with any silly full-screen effect I want to make, but the HUD rendered on top without any effect, always unadulterated.
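For reference, here is a minimal sketch of that render-to-texture setup: one framebuffer whose color attachment is a texture at the internal game resolution, drawn into first and then drawn onto the real window with the post-effect shader. The resolutions and names are placeholders, and it assumes a GL context and extension loader are already running.

```c
#include <GL/glew.h>

enum { GAME_W = 480, GAME_H = 272, WINDOW_W = 1920, WINDOW_H = 1080 };

static GLuint game_fbo, game_texture;

void create_game_framebuffer(void)
{
    /* the texture the whole game gets drawn into */
    glGenTextures(1, &game_texture);
    glBindTexture(GL_TEXTURE_2D, game_texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, GAME_W, GAME_H, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    /* the framebuffer object that redirects rendering into that texture */
    glGenFramebuffers(1, &game_fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, game_fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, game_texture, 0);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}

void render_frame(void)
{
    /* 1. draw the game into the off-screen texture */
    glBindFramebuffer(GL_FRAMEBUFFER, game_fbo);
    glViewport(0, 0, GAME_W, GAME_H);
    /* ... render road, sprites, the HUD to its own texture, etc. ... */

    /* 2. draw that texture to the real window with the full-screen shader */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, WINDOW_W, WINDOW_H);
    glBindTexture(GL_TEXTURE_2D, game_texture);
    /* ... glUseProgram(post_effect_shader); draw one screen-sized quad ... */
}
```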

[EDIT]
I can also rotate the game texture, and produce this kind of stuff:

Big version, click here!

Also, I can now have a 1080p window that doesn’t hurt performance much at all. I just have to stretch the buffer texture’s geometry to the edges of the screen.

Since all the textures’ resize filters are nearest-neighbor, and all scales are integer numbers, you get respectable, perfectly square pixels. =D
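One common way to keep those pixels square is to pick the largest integer scale of the internal resolution that still fits the window, and centre the result; this is just a sketch of that idea, not necessarily how the engine decides it.

```c
/* Compute an integer-scaled, centred destination rectangle for the
 * game texture inside the window. */
void fit_game_to_window(int game_w, int game_h, int window_w, int window_h,
                        int *dst_x, int *dst_y, int *dst_w, int *dst_h)
{
    int scale_x = window_w / game_w;
    int scale_y = window_h / game_h;
    int scale   = (scale_x < scale_y) ? scale_x : scale_y;
    if (scale < 1) scale = 1;          /* never shrink below 1:1 */

    *dst_w = game_w * scale;
    *dst_h = game_h * scale;
    *dst_x = (window_w - *dst_w) / 2;  /* centre the buffer in the window */
    *dst_y = (window_h - *dst_h) / 2;
}
```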
[END EDIT]


I really enjoy reading your development posts. I think writing a sprite-based “3d” engine like this would be a lot of fun.

If I understand correctly, you are sort of faking a perspective divide in the fragment shader. Any particular reason you aren’t doing a perspective projection in the vertex shader? Perhaps I’m misunderstanding because some of the images do look like you’re using a perspective projection.

Whatever the case may be, nice work!

That’s the tricky part: I’m not using a perspective projection, I’m using an orthographic projection.
This just means that you can place something at a distance of 1 or 100 from the camera and it won’t scale down.
Orthographic camera projections are considered the “2D” cameras on GPUs nowadays.
There’s no way to avoid it, every machine nowadays is a “3D machine”. They all have GPUs that can process 3D coordinates onto a screen… with a perspective projection (since everything uses shaders nowadays).

So I can’t do an actual rectangle in perspective (I’m not using the depth axis Z, I’m not even sending it to the buffer).
I do my fake perspective in the actual track code, but I don’t deal with geometry there.
What I mean by that is that when I do the “pseudo-3D game cool fake perspective” math on a road segment, I don’t do it per point of the geometry. Each segment is a sprite; that means 4 x,y points, and 6 actual points to send to the buffer since it’s 2 triangles. I would be doing the math 6 times per segment… but I don’t, because I only need to do it once per segment.

I keep Z points per road segment, right at the middle of the top edge and the middle of the bottom edge. That means I only need to process the top point of each segment, because the bottom point of “this” segment is at the exact same place as the top point of the previous segment. Each point then has a scale, and the road has a width. I take the top scale and the bottom scale, multiply each by the width of the road, and I get the width the road should have at that “height”.
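Here is a sketch of that per-segment projection, loosely in the style of Jake Gordon’s tutorial; the struct fields and camera_depth are illustrative, not the engine’s actual names. Only the top edge is projected, since the bottom edge reuses the previous segment’s top edge.

```c
typedef struct {
    float world_y, world_z;    /* top edge of the segment in world space */
    float screen_x, screen_y;  /* projected top edge on screen           */
    float screen_half_w;       /* half the road width at that edge       */
    float scale;               /* projection scale at that edge          */
} RoadSegment;

void project_segment_top(RoadSegment *s,
                         float cam_x, float cam_y, float cam_z,
                         float camera_depth, float road_width,
                         int screen_w, int screen_h)
{
    float half_w = screen_w * 0.5f;
    float half_h = screen_h * 0.5f;
    float dz     = s->world_z - cam_z;      /* distance from the camera */

    s->scale         = camera_depth / dz;   /* shrinks with distance    */
    s->screen_x      = half_w + s->scale * (0.0f - cam_x) * half_w;
    s->screen_y      = half_h - s->scale * (s->world_y - cam_y) * half_h;
    s->screen_half_w = s->scale * road_width * half_w;
}
```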

Basically it translates into a trapezoid shape facing the camera head-on, not a rectangle fading into the distance.
The problem is that trapezoid shapes facing the camera don’t go down really well with… well… pretty much any GPU.

I’ve used the following tutorial as a solution, and it explains the problem really well at the start of the article:
http://www.reedbeta.com/blog/2012/05/26/quadrilateral-interpolation-part-1/

Basically, the coordinates map the texture onto the “shape” on the screen (not a rectangle, but a trapezoid), so the UVs need a perspective correction to make it “look” like an actual trapezoid.

Just as a side idea: if you were to create a system for describing an FPS level in 3D coordinates, and used the road segment points’ math on each point of the geometry, plus this kind of UV correction, you would “translate” everything into weird trapezoids. I’m not sure about the effects, but you could very probably achieve a 3D rendering environment with a more… PSX feeling? Maybe even very similar to the weird perspective in Doom.

The enemies in Doom are what’s called “billboards” in games: basically a “sprite” in a 3D environment, a plane that is always facing the camera, with an image or animation on it.
Maybe making an FPS like that, with 3D rendering similar to that title, would be an interesting thing to do.


I just noticed what this thread is about.

Thanks Deci, Oh Deer Alpha is really fun and I am hopeful the game gets a more permanent second life.


Thanks for your response Deci. That clears things up. I like the idea of a 3d-accelerated sprite engine.

You’ll probably get to this eventually, but for the deer do you just use a quad pre-sized depending on the depth/distance from the camera? I.e. do you scale it before sending it to the vertex shader? (If you are going to cover this, I’ll just wait for the post. No need to respond.)

It’s not a problem, jjsimpso, I wasn’t going to cover it; you can kinda understand the concept on Lou’s Pseudo3D page, and more technically in this tutorial on how to build a Pseudo3D racer in JavaScript written by Jake Gordon. But you called that shot right, because it is one of the most characteristic visual features of games like Outrun, apart from the actual road effect.

I do send the geometry already “re-positioned” (scale, rotation, translation). Good thing you mention quads, because it is a quad (it has to be composed of 6 vectors because it has to be 2 triangles as far as OpenGL is concerned, but it is a quad, that’s how my implementation works it out). A deer would be a sprite, and all the sprites are a “machine”-level thing.

I chose to implement a “sprite” as a machine-interaction “thing” because it makes sense to me from my experience working on the Gameboy Advance. On those machines a sprite is not an advanced object with animation, collisions and all that stuff. A sprite is an actual hardware object that only describes a position at which a particular block of pixels will be pasted on the screen. On some 16- and 32-bit consoles you could even have rotation, centre point, scale, and a few more advanced funny features that nowadays have their own equivalents in shaders.

On the original Outrun arcade “machine”, the sprites were really basic… they didn’t even have scaling, and the CPU was too weak to do pixel scaling. So they used a technique that’s widely used nowadays in 3D: mipmapping =D. Basically the entire “atlas” had about 4 or 6 versions of a sprite, at various different scales.

Because they had to use that technique, they were forced to “round” the maths pretty wildly. Having to render at only 30fps also helped keep the cost low. So the sprites would only be rendered at the height of one of those segment points (either the top or the bottom one). I said in the previous post that those heights (segment points) keep a scale when they are projected onto the screen. If sprites are only rendered at one height per segment (bottom or top), each segment has its own set of N sprites.

So those sprites have X and Y coordinates in the level design (measured from the centre of the road; left is negative, right is positive). I have to multiply that coordinate by the scale of the segment’s point on the road, translate it again by the segment’s own translation, and also scale the original size of the sprite by the… same old segment scale.

That gives me a new position and scale for the sprite, which I have to calculate for each sprite when its segment is drawn. What my own Sprite structure does when it is drawn on the screen (or frame buffer) is ask for 6 points from a vertex buffer and apply the math that describes a QUAD of a scaled size at a specific position. It also updates the equivalent UV (texture coordinate) data.
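As a small illustration of that placement, here is a hedged sketch: the billboard’s offset from the road centre and its source size are both multiplied by the segment’s projected scale. All names are placeholders, not the engine’s.

```c
typedef struct {
    float offset_x;        /* offset from road centre: left < 0, right > 0 */
    float src_w, src_h;    /* source size of the sprite image              */
} Billboard;

void place_billboard(const Billboard *b,
                     float seg_screen_x, float seg_screen_y, float seg_scale,
                     float road_width, int screen_w,
                     float *out_x, float *out_y, float *out_w, float *out_h)
{
    float half_w = screen_w * 0.5f;

    /* scale the sprite's size by the segment's projection scale */
    *out_w = b->src_w * seg_scale * half_w;
    *out_h = b->src_h * seg_scale * half_w;

    /* offset from the road centre, scaled the same way, then translated
       to the segment's projected position; the sprite stands on that line */
    *out_x = seg_screen_x + b->offset_x * seg_scale * road_width * half_w;
    *out_y = seg_screen_y - *out_h;
}
```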

But that is all “drawing” a sprite does, and all it did on 2D machines: change a few numbers in memory.
Afterwards, at screen-rendering time, whether your program had finished changing all the sprites or not, the machine interprets those numbers as orders to draw pixels on the screen.
What I do is try to copy all the geometry (6 points per sprite I want drawn) at a “good” time, so it’s all there 60 times per second.


Anyway, I did some more work on the Frame Buffers. I think they are already good to go.
Thanks to these Frame Buffers I was able to achieve a good number of things, and a few pictures.

For starters, I can have various frame buffers, each with its own position and rotation:

And I was able to achieve various different render resolutions combined with different display resolutions:

So now all I need is to get some sprites rendering in there.
