cool
I’ve been working on some not-too-visually-striking stuff, although there’s now configurable upscaling/downscaling of the main picture (here it’s a 2x downscale combined with MSAA). But here’s a nice one: from basically the beginning I’ve been handling character directions by flipping only the pose, i.e. if you look at the magnificently textured bodies of the fighters you’ll see they present a different side to the viewer instead of sprite-style mirroring. If they had an eyepatch, it’d behave like Sagat’s in SF4 rather than Baiken’s in GG Revelator.
However now it’s freely configurable per mesh by tagging the meshes you want to mirror in Blender, so as those boxy gauntlets demonstrate I can mix mirrored meshes (gauntlet) and flipped poses (body) on the same character.
Next I’m probably finally gonna work on throws, which I’ve been putting off forever since they require both fighters to animate together, and ideally I need an approach where I don’t have to animate all possible fighter combinations. Usually in sprite games (and ArcSys games) there’s a set of generic throw-victim poses shared by all characters, while modern 3d games use inverse kinematics to do some dynamic posing, and I’d sure as heck like to avoid that second one because writing an IK solver is several steps above the simple animation playback stuff I do.
solution: all characters turn into a sphere during throws, revert back to normal model after splatting on the ground. clayfighter 69 baby
Sounds like I’ve been unfairly ignoring Clayfighter in my fighting game research! That solution’s kinda both super clever and super terrible.
tbh i don’t think even clayfighter stooped to this level. i guess it was too busy being racist.
so really it’s just CaniaFighter 1
Ahaha, all good then!
Shadows!
No self-shadowing, because I’m doing the Xrd-style let’s-pretend-they’re-sprites thing, but it’s an option.
As a bonus, the freak show compilation from figuring out said shadows:
Although that took, like, two evenings. The bulk of my efforts for the last couple of weeks has been working on networking. For all I bragged earlier this year that I’d added netcode, and rollback netcode at that, there turned out to be quite a few bugs, although they weren’t in the rollbacks themselves. Determinism, fast saving and restoring of state: I’ve had all that in mind from early on, so it’s been pretty solid and I can get it to run under some pretty high pings.
Rather, the bugs have been in the surrounding network state machine (“why does the game freeze on choosing instant rematches when the ping’s almost zero?”; actually still working on that one, although I sort of understand what’s wrong) and in some conceptual issues, foremost what should be part of the stuff that rolls back and what shouldn’t, and how to manage the connection between the two so that, say, you don’t get 10 wins from a single KO. And also switching between tight synchronization through rollbacks and sections where it’s expected that both peers may take different amounts of time to do a thing (i.e. loading screens).
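For what it’s worth, one common pattern for that “one KO, one win” problem is to keep side effects out of the rolled-back state and only commit them once their frame is confirmed, i.e. old enough that it can no longer be rolled back. A minimal Python sketch of the idea, with the class and method names invented for the example:

```python
# Sketch: queue side effects (win counts, sounds, UI) with the simulation
# frame that produced them.  Rollbacks discard and re-produce them; only
# effects at or before the confirmed frame are ever actually applied.

class EffectQueue:
    def __init__(self):
        self.pending = []  # list of (frame, effect) pairs

    def emit(self, frame, effect):
        # Called from the (re-)simulated game logic.
        self.pending.append((frame, effect))

    def rollback_to(self, frame):
        # Discard effects from frames we are about to re-simulate,
        # so a re-simulated KO doesn't get counted twice.
        self.pending = [(f, e) for (f, e) in self.pending if f < frame]

    def commit(self, confirmed_frame):
        # Return effects that can no longer be rolled back; apply these
        # to the non-rolled-back side (scores, menus) exactly once.
        done = [e for (f, e) in self.pending if f <= confirmed_frame]
        self.pending = [(f, e) for (f, e) in self.pending
                        if f > confirmed_frame]
        return done
```

So a KO emitted on frame 100 survives any number of rollbacks past frame 100 (discarded, then re-emitted by the re-simulation), and gets committed exactly once when frame 100 is confirmed.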
I also need to find some camera smoothing function, because the camera snapping around is the one thing that’s immediately jarring during rollbacks.
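One simple candidate is frame-rate-independent exponential smoothing, where you parameterize by the time it takes to close half the remaining distance to the target. A Python sketch (the function name is made up, this isn’t engine code):

```python
import math

def smooth_camera(current: float, target: float,
                  half_life: float, dt: float) -> float:
    """Move `current` toward `target`; after `half_life` seconds the
    remaining distance is halved, regardless of frame rate."""
    # Classic lerp-toward-target, with the factor corrected by dt so the
    # curve is identical at 30 and 60 fps.
    t = 1.0 - math.pow(0.5, dt / half_life)
    return current + (target - current) * t

# Example: camera x chasing a post-rollback target of 10.
x = 0.0
for _ in range(60):  # one second at 60 fps
    x = smooth_camera(x, 10.0, half_life=0.1, dt=1.0 / 60.0)
```

After one second with a 0.1 s half-life the camera has closed all but 0.5^10 of the gap, so snaps turn into a quick glide instead of a teleport.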
I’ve been converting one of the RetroArch NTSC shaders for shenanigans, and while I’m not sure I have the math quite right yet, it’s starting to look pretty nice.
Next to combine it with my PS1-style dithered wobbly polygon shaders I guess!
Cleaned up and correctly linearized my rendering path, which of course meant the next thing was to add bloom effects to make cool neons or something
I finally added the necessary building blocks for throws over the weekend! By dumb luck, or through well-informed design considerations I’ve since forgotten, it turned out to be reasonably simple and few alterations were needed.
That being said, that terrible throw anim is a collector’s item now, as Blender refuses to load its source file again. Thankfully it was separate from everything else, so the character is otherwise unharmed.
Made another room for the Caves of ZZT remake collab:
I’ve had a blender-shader-nodes-to-HLSL (the shader language used by DX11) conversion script lying around for quite a while, so I finally took some time to integrate it into my mesh exporter and overhaul the whole mesh conversion code to clean it up a bit. The result is I can pretty much design my materials and have them look the same in my engine, complete with being able to override the colors per player.
So, it kinda looks the same, but all the materials were authored in Blender this time around!
Of course, this is only really true for roughly one third of Blender’s shader nodes, the ones I’ve actually used in my tests, and that’s biased towards non-photorealistic stuff. PBR is hard, also I wanna make animu graphics!
Anyway, it’s pretty flexible: whatever node you want to be palettizable, or that otherwise needs to access an engine-specific value, you just have to put in an adequately labeled frame. So for example in that first shot two colors are palettizable, and in the second one that checkers node is replaced during export by a call to the engine’s shadow function.
The generated shader code is hardly elegant and I’m sure there’ll be room for optimizing it in the future but I’m pretty happy about it and it’s immensely rewarding to see the materials directly appear as designed in the game. I’ll need some interface for making color palettes though because typing the values in text files gets old really fast. I also want to make some material properties animatable, so there’s always more work to do!
The GUI is a bit slapdash, but I’ve got my palette editor! There are basically three ways to change colors. As per the previous post, a shader color input that’s been properly tagged is available; you can also override the hue for textures (like I’ve been doing with the UV grid for basic player colors all this time); and if a texture is an 8-bit palettized texture, its palette is available for modification. For the test case I just stuck one of those on the billboard behind each character, but I think that’s gonna be pretty useful in the long term for the kind of cartoon look I want to achieve.
Wanted to see if I could script the general behavior for a darkstalkers-style symmetrical clone, or how many commands I needed to add to the script engine to make it possible. Turns out adding the ability to set an animation frame independently of state was enough!
EDIT: tested a GG Eddie-style setup, ie being able to spawn an independently controlled puppet that has its own states and some quirks like attacking on button releases:
This one didn’t need any new instructions but it revealed quite a few bugs and quandaries regarding inputs and what they’re relative to. Plus so far I’ve been following the mugen model where everything related to a character (including projectiles and puppets) shares the same state and transition list, so you can do lots of weird stuff but when you want to segregate two state graphs you’ve gotta be careful.
Here’s a thing unrelated to the fighting game, an impostor billboard generator and renderer (model courtesy of Omikron) which I made after seeing videos of Cyberpunk’s faraway traffic (which only has horizontal sprites):
I’ll try adding some parallax and blending to it to make it less choppy but if I ever wanna make a wing commander clone this one is all I need!
this rules
wow damn i’ve been wanting to make something like this for ages. what was your process if u don’t mind my asking
Sure!
So for starters, it’s a billboard. Take a bunch of snapshots of the object from various angles, choose one based on the camera angle, display it on a quad. But the billboard is not camera-facing. Instead, the billboard’s orientation in the world is the same one that was used to take the chosen snapshot.
On the math side, your workhorse is a duo of functions that can map a 3d vector (the view direction) to a 2d plane/grid (the sprite sheet), and the other way round.
In the original Wing Commander it was spherical coordinates (latitude-longitude), but I’m using an octahedral mapping instead, as suggested by that blog post about Fortnite trees, because it spreads the snapshots out better without oversampling at the poles (the videos are dead, but they’re about fancier stuff like parallax and blending; for this discussion the interesting part is the early gifs and pics that show the octahedron thing). Also, I wanted to code a cool octahedral thing.
It looks like this:
The functions I used for the conversion are from that page.
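For reference, the standard octahedral encode/decode pair looks roughly like this in Python (a sketch following the usual formulation, not necessarily the exact code from that page):

```python
import math

def _sign(x: float) -> float:
    # Sign that returns 1.0 for 0.0, as the octahedral fold expects.
    return math.copysign(1.0, x)

def oct_encode(v):
    """Map a unit 3d direction to a point in the [0,1]^2 sheet."""
    x, y, z = v
    s = abs(x) + abs(y) + abs(z)
    px, py = x / s, y / s  # project onto the octahedron's top faces
    if z < 0.0:            # fold the lower hemisphere outward
        px, py = (1.0 - abs(py)) * _sign(px), (1.0 - abs(px)) * _sign(py)
    return (px * 0.5 + 0.5, py * 0.5 + 0.5)  # [-1,1] -> [0,1]

def oct_decode(u, v):
    """Inverse of oct_encode: a [0,1]^2 point back to a unit direction."""
    px, py = u * 2.0 - 1.0, v * 2.0 - 1.0
    z = 1.0 - abs(px) - abs(py)
    if z < 0.0:            # unfold the lower hemisphere
        px, py = (1.0 - abs(py)) * _sign(px), (1.0 - abs(px)) * _sign(py)
    n = math.sqrt(px * px + py * py + z * z)
    return (px / n, py / n, z / n)
```

Round-tripping any unit vector through encode then decode gives the same direction back, which is exactly the property the sheet generation and sprite lookup rely on.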
But the octahedral thing doesn’t particularly matter, just that you can do the back-and-forth between the sprite sheet and direction vector. And so:
- To generate the sheet, assuming the grid spans a 0-1 interval: for each square, take its center, convert it to a direction, and use that direction to build a “lookat” matrix for the camera. The camera projection is orthographic (not a requirement unless you wanna do parallaxes later) and scaled to fit the mesh’s bounding sphere, possibly with some margin to avoid pixel bleeding.
- To choose which billboard sprite to display: convert the camera direction or eye vector to 2d. The square that contains that 2d point is the one you should show.
- To obtain the 3d orientation of the billboard: convert the center of the chosen square back to 3d and use the same matrix you’d have used for the camera. The size and center of the billboard are those of the bounding sphere used for the snapshots.
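The shared bits of those steps can be sketched in a few lines; here `direction_to_uv` stands in for whatever 3d-to-2d mapping you picked (octahedral, lat-long, ...), and all the names are invented for the example:

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def basis_from_direction(fwd, up=(0.0, 1.0, 0.0)):
    """Right/up/forward basis looking along `fwd`.  The same basis is
    used twice: as the snapshot camera's orientation, and later as the
    billboard quad's rotation (breaks down when fwd is parallel to up)."""
    f = normalize(fwd)
    r = normalize(cross(up, f))
    u = cross(f, r)
    return r, u, f

def pick_square(view_dir, direction_to_uv, grid):
    """Map the camera->object direction into the sheet and return the
    indices of the grid square whose snapshot should be displayed."""
    u, v = direction_to_uv(view_dir)
    return (min(grid - 1, int(u * grid)), min(grid - 1, int(v * grid)))
```

Generation walks every square center through `basis_from_direction`; at runtime, `pick_square` chooses the sprite and `basis_from_direction` on the chosen square’s center orients the quad.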
With some extra math (project the up vector of the matrix on the camera plane then rebuild a matrix from that and the camera direction, I think) you can have a fitting rotation and still be perfectly camera facing but I like the slight perspective variation.
Also, of course, Unreal can generate these out of the box these days, and there are Unity implementations there (for free) and there (30 bucks) with more bells and whistles. They’re actually meant for LODs, but they can probably be coerced into Wing Commander-style use too.