Ray Tracer!

For fun, I decided to implement my own version of Ray Tracing in One Weekend. It runs on desktop, and I also have a WebAssembly version on this here site (use that one on iOS/Safari or if the other link doesn’t work; some browsers don’t support SharedArrayBuffer). I did a few things differently with my version:

  • SDL2 rendering. The book outputs PPM files through stdout, but I realized that I didn’t have a single program that could view PPM files, and a quick Google search didn’t turn up anything promising. So I just went with outputting directly to the screen with SDL. It was a little more upfront work, but it made debugging a lot easier later on.
  • Dear ImGui for the built-in UI. This also makes it a lot easier to test things, since you don’t have to shut down the program to change various variables and see how they affect the output. I have to say, I’ve always wanted to use ImGui in a project but never had the chance. It’s now one of my favorite libraries. I know the limitations of immediate-mode GUIs, but it’s definitely nice not having to pull in the heft of something like Qt or GTK.
  • GLM for math instead of custom math classes. I just didn’t feel the need to reinvent the wheel here, especially since GLM is very optimized and header-only.
  • Multi-threading. C++ has come a long way here; I remember using the OS libraries in the past and having a miserable time. Amazingly, using std::thread and std::atomic I got threading working in a day without any significant bugs. Since I never trust threading, I’m sure there’s something lurking in there, but so far everything has been smooth. I haven’t done any optimizations to eliminate false sharing on cache lines or anything like that, but so far the speedup seems to be roughly linear, i.e., 12 threads is about 12x faster than one thread.
  • Emscripten for WebAssembly. Fortunately Emscripten has very good SDL2 support, and surprisingly, it can even do multithreading. The only downside with its multithreading is that SharedArrayBuffer requires cross-origin isolation, so you have to serve HTTP headers (Cross-Origin-Opener-Policy and Cross-Origin-Embedder-Policy) that significantly restrict what’s allowed on the page. That meant figuring out how to modify the Apache configuration to get it online. Overall it wasn’t that hard though.
  • CMake for cross-platform build support (tested w/ Visual Studio 2022, Xcode, WebAssembly). I have mixed feelings about CMake. On one hand, it’s an incredibly useful tool, and I have no interest in maintaining a lot of separate project files, so it’s great in that regard. On the other hand, I absolutely hate the syntax of its custom language. I’m still baffled as to why they couldn’t have just used a scripting language like Lua or JavaScript or Python. CMake’s syntax is just weird: things like “else” look like function calls, it’s not clear when you’re supposed to quote things and when you aren’t, and it’s not clear when the order of calls matters, or even whether they’re function calls or some sort of declaration. I’m reminded of the old Bjarne Stroustrup quote: “There are only two kinds of languages: the ones people complain about and the ones nobody uses.” Replace “languages” with “tools” and I think it applies to CMake too.
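
Just to illustrate the threading pattern from above (this is a sketch, not the actual renderer; the scanline scheme and names are mine): a std::atomic row counter is all the coordination the worker threads need, since every pixel ends up written by exactly one thread.

```cpp
#include <atomic>
#include <cassert>
#include <thread>
#include <vector>

struct Pixel { float r, g, b; };

// Each thread pulls the next unrendered scanline from an atomic counter,
// so rows are handed out uniquely and no locking is needed on the pixels.
void render_parallel(std::vector<Pixel>& framebuffer, int width, int height,
                     unsigned thread_count) {
    std::atomic<int> next_row{0};
    auto worker = [&]() {
        for (int y = next_row.fetch_add(1); y < height; y = next_row.fetch_add(1)) {
            for (int x = 0; x < width; ++x) {
                // The real trace_ray() call would go here; this just writes
                // a placeholder gradient so the sketch is self-contained.
                framebuffer[y * width + x] = {float(x) / width, float(y) / height, 0.25f};
            }
        }
    };
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < thread_count; ++i) pool.emplace_back(worker);
    for (auto& t : pool) t.join();
}
```

Because each thread grabs whole rows, writes from different threads land in different parts of the framebuffer, which is part of why the speedup stays close to linear even without worrying about false sharing.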

Anyway, in terms of future goals, what I’m planning next for this is:

  • glTF mesh/scene support
  • Lighting/Emissive materials
  • Textures
  • PBR materials
  • Optimize ray hit detection with something like octrees (haven’t really decided on a data structure yet)
  • Maybe a React UI for the web version (although I really like IMGUI even there)

Updated site!

I put more things on here! Which, depending on the quality of said things, is either a gift or a curse. It’s been languishing for a few years, so, I decided I should have more stuff.

Specifically:

  • Moved over from Dreamhost to Digital Ocean and switched to a theme made in the last decade. Somehow Dreamhost was taking 8+ seconds to respond, so, this is a lot better.
  • An “Art” section, where you can get things like a font of my handwriting. Exciting!
  • A “Tools” section (Note: I have many more that I plan on adding here)
  • A “Libraries” section (Same note as the tools section)
  • Some Shaders I did for DragonVale back in ye old days
  • Various links to horrible game jam games I’m extremely embarrassed by oh god just kill me

Hopefully soon I’ll have more to share on the game I’m working on, along with some more tools and libraries.

3rd Person Camera Movement with Compute Shaders

This is something I was playing with a few years ago, but I just bumped into it again and I thought it was neat enough to share. The problem I was trying to solve here is how to keep a 3rd person camera from colliding with various obstacles and/or having the player occluded, while also keeping the motion relatively smooth.

The classic way to do this is with a simple ray cast, and, well, that’s certainly an effective strategy. The main problem, from my perspective anyway, is it can lead to abrupt distance changes, and the objects that cause those abrupt changes are by definition out of view (because they’re behind the camera), which makes things feel a bit unpredictable.

Most games solve this by just having good level design, but I thought a more interesting way would be to calculate a smooth transition. One option would be to shoot out a lot of rays, but that could get expensive, and small colliders could still slip through the gaps between rays. I thought using the depth buffer might help with this, and indeed it does, so here’s a quick demo of what it looks like. The red texture at the top is a (small) depth buffer looking out from the character’s focal point towards the actual camera; layers are used so it only renders certain things (i.e., things you don’t want the camera to react to, like NPCs and small objects, are filtered out).

I’m using a compute shader to calculate the weighted average to figure out the final camera distance. It works fairly well, although it is admittedly a little more expensive than I’d like. At some point I might add it to the asset store, or just create a download for it.
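
The compute shader itself is engine-specific, but the reduction it performs is easy to sketch on the CPU. This is an illustrative approximation rather than the actual shader; the center-biased weighting function and the names are my assumptions:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// CPU sketch of a weighted-average camera-distance reduction.
// `depths` is a size x size depth buffer rendered from the character's focal
// point toward the desired camera position; each sample holds the distance to
// the first occluder along that texel's ray (or max_distance if nothing hit).
// Samples near the buffer's center (the camera axis) get more weight, so an
// occluder grazing the edge of the buffer nudges the camera in gradually
// instead of yanking it.
float camera_distance(const std::vector<float>& depths, int size, float max_distance) {
    float weighted_sum = 0.0f, weight_total = 0.0f;
    for (int y = 0; y < size; ++y) {
        for (int x = 0; x < size; ++x) {
            float dx = (x + 0.5f) / size - 0.5f;  // texel offset from center
            float dy = (y + 0.5f) / size - 0.5f;
            float weight = 1.0f / (1.0f + 10.0f * (dx * dx + dy * dy));
            weighted_sum += weight * std::min(depths[y * size + x], max_distance);
            weight_total += weight;
        }
    }
    return weighted_sum / weight_total;
}
```

With every sample clamped to the maximum distance, an unoccluded view returns the full camera distance, and a wall pulls the result in progressively as it covers more of the buffer.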

A BSP Compiler for Unity 3D

This is a bit obscure, but recently I’ve been finding myself needing to write some code to answer the question “is this object in an area or not.” Unity3D provides some functionality to do this, but it tends to be a bit imprecise in that only very simple primitives can be used (box collider, sphere collider, convex hull, etc.).

So, I built some tools to automate this and make it easier. You can see/download the result on GitHub.

screenshot

On the left is the original mesh, on the right is a visualization showing which convex “zones” are created.

What this basically does is take the selected mesh and “fill” it with convex colliders marked as triggers. You can then use those triggers however you want.

Some caveats: it’s better to do this with low-detail models, as the compiler can take quite a while on larger ones. Also, while there’s no hard requirement that the meshes be sealed, the results will generally be better if they are.
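
For what it’s worth, the runtime query those convex zones make possible boils down to a plane test. This is a hypothetical sketch (the repo’s actual data layout may differ): a convex zone stored as its bounding planes, with a point inside iff it’s on or behind all of them.

```cpp
#include <cassert>
#include <vector>

// A convex "zone" described by its bounding planes: a point is inside
// iff it sits on or behind every plane. This is the kind of question the
// generated trigger colliders answer.
struct Vec3 { float x, y, z; };
struct Plane { Vec3 n; float d; };  // inside means dot(n, p) + d <= 0

float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

bool inside_zone(const std::vector<Plane>& zone, const Vec3& p) {
    for (const Plane& pl : zone)
        if (dot(pl.n, p) + pl.d > 0.0f)
            return false;  // in front of one plane means outside the convex zone
    return true;
}
```

An object is then “in the area” if any of the zones the compiler produced reports it inside.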

LN39

Here’s a quick video of the game I’m working on right now. I’ll write some more details up soon.

Procedural World

I’ve been working on some prototypes with regards to generating procedural worlds recently, having been inspired quite a bit by this excellent series on polygonal map generation, and this great blog. I’m trying to go for something a bit different than what I think they’re after, but there are a lot of great ideas there.

(For those that don’t know, “procedural world” just means “giving a program a set of rules so it can build a world from scratch without any human intervention”)

One of my goals with this project is to try to generate a procedural world that still has “dramatic points of interest”, if you will. When I say dramatic, what I mean is that some areas should feel differentiated and special; it shouldn’t feel too uniform. One of the problems with generating a world is that everywhere tends to end up feeling the same, which I’m trying to avoid.

This is what my most recent attempt looks like, a quick hack-up C# app that I can also hook into Unity:

 

The idea is to generate a “feature map” describing what the dominant features of each area are, and then generate the terrain based on that. Why not generate the heightmap first and then the features? Mostly because the feature map is relatively clean: it’s made of simple convex polygons, which are easy to reason about programmatically and mathematically. I have a lot of code of the variety “if this area is surrounded by these certain features, do this certain thing”. Heightmaps are pretty messy, so while you could go the Perlin noise route, generate the terrain first, and then try to classify each area afterwards, I thought it would be somewhat easier to throw down the polygons first and then figure out a heightmap that works for them. I think that approach creates something a bit less realistic, but I’m mostly happy with the results so far, although it could use quite a bit of post-processing to look more natural (it’s hard to see from the screenshot, but the transitions are very rigid at the moment).

The goal with the feature maps is mostly to ensure that I end up with something playable in a deterministic way (ie, you can run checks against it and so on), which I think is a bit hard when you’re just working off pure noise.

In this screenshot, the different feature areas are color coded. In this case, the green areas are “highlands”, the teal areas are canyons, and the dark green areas are bits of forest or foothills. (You can guess what the blue and grey ones are.) The hard-to-see orange squiggly areas are a (very rough) stab at defining road layouts for towns, although the towns are unconnected at the moment.

You can get a clearer idea of that from the base feature map below, which the above was generated from:

 

Basically this is just Voronoi polygons + Lloyd relaxation + sampling from a noise source w/ some rules, a lot like the link to Amit’s series above. I still feel like this is all a bit too random at this point, though; it lacks points of interest. The next step is to run some simulations across the generated landscapes to come up with a world that has something of a story to tell.
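
For the curious, the Lloyd relaxation step mentioned above can be sketched in a few lines. This is a discretized approximation I wrote for illustration (the real thing would compute centroids of the actual Voronoi polygons rather than rasterizing over a sample grid):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct P2 { float x, y; };

// One discretized Lloyd relaxation step over the unit square: assign a grid
// of sample points to their nearest seed (effectively rasterizing the
// Voronoi diagram), then move each seed to the centroid of its cell.
// A few iterations of this evens out the cell sizes and shapes.
std::vector<P2> lloyd_step(const std::vector<P2>& seeds, int grid = 128) {
    std::vector<P2> sum(seeds.size(), P2{0.0f, 0.0f});
    std::vector<int> count(seeds.size(), 0);
    for (int gy = 0; gy < grid; ++gy) {
        for (int gx = 0; gx < grid; ++gx) {
            P2 s{(gx + 0.5f) / grid, (gy + 0.5f) / grid};
            size_t best = 0;
            float best_d = 1e9f;
            for (size_t i = 0; i < seeds.size(); ++i) {
                float dx = s.x - seeds[i].x, dy = s.y - seeds[i].y;
                float d = dx * dx + dy * dy;  // squared distance is enough
                if (d < best_d) { best_d = d; best = i; }
            }
            sum[best].x += s.x; sum[best].y += s.y; ++count[best];
        }
    }
    std::vector<P2> relaxed(seeds.size());
    for (size_t i = 0; i < seeds.size(); ++i)
        relaxed[i] = count[i] ? P2{sum[i].x / count[i], sum[i].y / count[i]} : seeds[i];
    return relaxed;
}
```

Running two or three of these steps before sampling features is what keeps the polygons reasonably uniform without making them look like a perfect honeycomb.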

Voronoi noise is cool

I’ve been playing with procedural world generation for a while now (because I’m lazy when it comes to level design), which of course means noise functions!

I’m really starting to dig Voronoi noise after looking at some of the stuff you can do with it. So, I hacked together a Voronoi noise previewer:

(Next step: making it actually update in real time. It’s slow!)
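
For reference, the core of a Worley-style Voronoi noise function, the kind of thing a previewer like this evaluates per pixel, fits in a couple dozen lines. This is an illustrative sketch, not the previewer’s actual code; the hash constants are arbitrary choices:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <cstdint>

// Hash a grid cell (plus a small key k) to a pseudo-random float in [0, 1).
// The multiplier constants are arbitrary large odd numbers, nothing magic.
static float hash01(int x, int y, int k) {
    uint32_t h = uint32_t(x) * 374761393u + uint32_t(y) * 668265263u
               + uint32_t(k) * 2246822519u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return float(h & 0xFFFFFFu) / float(0x1000000);
}

// Worley-style Voronoi (cellular) noise: drop one feature point into each
// grid cell, and return the distance from p to the nearest feature point
// among the 3x3 neighbouring cells (the classic "F1" value).
float voronoi_f1(float px, float py) {
    int cx = int(std::floor(px)), cy = int(std::floor(py));
    float best = 1e9f;
    for (int oy = -1; oy <= 1; ++oy) {
        for (int ox = -1; ox <= 1; ++ox) {
            float fx = (cx + ox) + hash01(cx + ox, cy + oy, 0);
            float fy = (cy + oy) + hash01(cx + ox, cy + oy, 1);
            float dx = px - fx, dy = py - fy;
            best = std::min(best, std::sqrt(dx * dx + dy * dy));
        }
    }
    return best;
}
```

Rendering a preview is just evaluating voronoi_f1 over a scaled pixel grid; the fancier cell-border looks come from also tracking the second-nearest distance and using variants like F2 − F1.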

Lambda!

Made this while goofing around, to see if I could compress an entire Python program into a single expression. Who said Python lambdas aren’t powerful?!

(lambda:
    not globals().__setitem__('sys', __import__('sys'))
    and not globals().__setitem__('this', sys.modules[globals()['__name__']])
    and not globals().__setitem__('time', __import__('time'))
    and
    #program
    [setattr(this, k, v) for k, v in {
            'set_color': (lambda c: w(['*', ' '][c])),
            'abs': (lambda t: (t + (t >> 31)) ^ (t >> 31)),
            # wrap stdout.write so it returns None; Python 3's write()
            # returns the character count, which would break the `not` chains
            'w': (lambda s: sys.stdout.write(s) and None),
            'smash': (lambda t: -((t * -1) >> 31)),
            'color': (lambda n, k: set_color(smash(k & (n - k)))),
            'col': (lambda n, k: k <= n and not color(n, k) and col(n, k + 1)),
            # n // 2, not n / 2: the redefined bitwise abs() needs an int
            'row': (lambda n: not w(' ' * (40 - abs(n // 2)))
                              and (col(abs(n), 0) or True)
                              and not w("\n")
                              and (abs(n) < 63 or n < 0)
                              and not time.sleep(0.05)
                              and row(n + 1)),
            'triangle': lambda: row(-60) or True and triangle()
        }.items()] and triangle())()