blog zone

Website upgrades!

I added some new features to the server.

RSS Feed

Who knows if anyone will need this, but it's neat!!
I was inspired by Everest Pipkin's page here: https://everest-pipkin.com/teaching/handmadeRSS My implementation is a bit more generative, but keeps the handwritten simplicity.
Entries on this blog are simple markdown files that get converted to html using snarkdown. Then I manually add the entries I want published to a js object:
export const blogPosts = [
  {url: 'website-upgrades.md', date: '2024-10-01'},
  {url: 'particles.md', date: '2022-06-01'},
  {url: 'game-engine-input-tracking.md', date: '2022-05-22'}
]

The RSS feed is generated from this list of entries on request.
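On request, it gets built from that list, roughly like this (a sketch, not the exact server code; the file path and site URL are stand-ins):
import snarkdown from 'snarkdown'
import { readFile } from 'node:fs/promises'
import { blogPosts } from './blogPosts.js'

export async function buildFeed () {
  const items = await Promise.all(blogPosts.map(async post => {
    // each entry is still just its markdown file, converted at request time
    const markdown = await readFile(`./blog/${post.url}`, 'utf8')
    return `<item>
  <link>https://example.com/blog/${post.url}</link>
  <pubDate>${new Date(post.date).toUTCString()}</pubDate>
  <description><![CDATA[${snarkdown(markdown)}]]></description>
</item>`
  }))
  return `<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
<channel>
  <title>blog zone</title>
  <link>https://example.com/blog</link>
${items.join('\n')}
</channel>
</rss>`
}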

Lil game engine particle system

I made a lil thing for my game engine that simulates particles! I think it's kind of clever, so I wanted to share it. There are a few write-ups of this technique elsewhere, mainly https://offscreencanvas.com/issues/001/, but I wanted to jot down my thought process.

Goals!

Lots of particles, simulated entirely on the gpu, with a flat-shaded look that fakes transparency.

The Parts

The transparency part is easy, I'm just gonna stylistically render everything opaque and use the size of the particles to simulate transparency-ish. I think it looks cool when everything is the same shape and flat shaded.
The slightly tricky part is simulating things on the gpu. Instead of keeping the positions of each of the particles in regular memory, I can keep them in gpu memory by storing them in a texture. Then I can render the particles themselves using instanced rendering: render 4000 spheres, and offset each one by the position texture indexed by the instance id.
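Here's a rough regl sketch of the draw side, assuming a 64x64 position texture (4096 particles, close enough) and a sphere mesh; sphere, the camera context, and the numbers are stand-ins, and regl needs the angle_instanced_arrays extension for the divisor to work:
const SIZE = 64 // 64 * 64 = 4096 particles

// one uv per instance, pointing at that particle's texel in the position texture
const particleUVs = []
for (let i = 0; i < SIZE * SIZE; i++) {
  particleUVs.push([((i % SIZE) + 0.5) / SIZE, (Math.floor(i / SIZE) + 0.5) / SIZE])
}

const drawParticles = regl({
  vert: `
  precision mediump float;
  attribute vec3 position;   // sphere mesh vertex
  attribute vec2 particleUV; // per-instance lookup into the position texture
  uniform sampler2D positions;
  uniform mat4 projection, view;
  void main () {
    vec4 p = texture2D(positions, particleUV); // xyz = position, w = remaining life
    float size = 0.1 * p.w;                    // shrink as the particle dies
    gl_Position = projection * view * vec4(position * size + p.xyz, 1.0);
  }`,
  frag: `
  precision mediump float;
  void main () { gl_FragColor = vec4(1.0, 0.5, 0.1, 1.0); } // flat shaded
  `,
  attributes: {
    position: sphere.positions, // stand-in sphere mesh
    particleUV: { buffer: regl.buffer(particleUVs), divisor: 1 } // advances once per instance
  },
  elements: sphere.cells,
  instances: SIZE * SIZE,
  uniforms: {
    positions: regl.prop('positions'),
    projection: regl.context('projection'), // assuming a camera command provides these
    view: regl.context('view')
  }
})
Sampling the texture in the vertex shader is what makes this work; WebGL 1 allows it as long as the hardware reports vertex texture units, which is nearly everything these days.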
The fun part is using a technique called gpgpu (gp for general purpose) to draw the new positions onto an offscreen texture, with the old position texture as input. If I was using WebGL 2, I'd have access to transform feedback stuff, but I am limited to WebGL 1 with regl. The only particle-system-specific part ends up being a fragment shader where each fragment is one position in the position texture.
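In regl that shape looks something like this sketch: two float framebuffers that get ping-ponged every frame, with a fullscreen pass running the update shader (updateFrag, sketched below). WebGL 1 only gives you float textures via the OES_texture_float extension, so that's assumed here:
// two framebuffers holding particle state; read one, write the other, then swap
// (SIZE is the texture side length from the draw sketch above)
const makeStateBuffer = () => regl.framebuffer({
  color: regl.texture({
    width: SIZE,
    height: SIZE,
    type: 'float', // needs OES_texture_float in WebGL 1
    data: new Float32Array(SIZE * SIZE * 4) // everything starts with life 0, so it respawns immediately
  }),
  depthStencil: false
})
let prevState = makeStateBuffer()
let nextState = makeStateBuffer()

const updatePositions = regl({
  framebuffer: () => nextState,
  vert: `
  precision mediump float;
  attribute vec2 position;
  varying vec2 uv;
  void main () {
    uv = 0.5 * (position + 1.0);
    gl_Position = vec4(position, 0.0, 1.0);
  }`,
  frag: updateFrag, // one fragment per particle, see the shader below
  attributes: { position: [-4, -4, 4, -4, 0, 4] }, // one big triangle covering the screen
  count: 3,
  uniforms: {
    positions: () => prevState,
    model: regl.prop('model'),
    dt: regl.prop('dt')
  }
})

// each frame, at the same transform level as the emitter:
updatePositions({ model: modelMatrix, dt: 1 / 60 }) // modelMatrix is a stand-in
;[prevState, nextState] = [nextState, prevState]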
For a firey effect, I add some curl noise and vertical velocity. I also store the remaining 'life' of the particle in the alpha channel and use it to change the size of the particles; once the life reaches zero, I reset the particle's position. The key thing here is that I also pass the model transformation matrix into this shader. By transforming the particle's starting position by the model matrix, and running this gpgpu pass at the same transform level where the particles should show up, they get spawned at that location on the model and then go on to be simulated in world space.
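Sketched out, that update shader looks something like this; the curl noise is stubbed, and a real version would randomize the spawn positions instead of putting every particle at the model's origin:
const updateFrag = `
precision mediump float;
uniform sampler2D positions;
uniform mat4 model;
uniform float dt;
varying vec2 uv;

vec3 curlNoise (vec3 p) { return vec3(0.0); } // stub; the real one is a proper noise function

void main () {
  vec4 state = texture2D(positions, uv);
  vec3 pos = state.xyz;
  float life = state.w - dt;
  if (life <= 0.0) {
    // respawn: run the start position through the model matrix so the particle
    // spawns on the model, then gets simulated in world space from there
    pos = (model * vec4(0.0, 0.0, 0.0, 1.0)).xyz;
    life = 1.0;
  } else {
    pos += (curlNoise(pos) + vec3(0.0, 1.5, 0.0)) * dt; // swirl plus upward drift
  }
  gl_FragColor = vec4(pos, life); // life lives in the alpha channel
}`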
particles.png

Upgrades!

I'll probably switch to rendering billboards that write depth values (so i still get 3d spheres) instead of actual sphere meshes.
I also wanna figure out a cool way to pack all the particle data into a single texture so that I can have multiple systems in a single texture.
And I need to upgrade my scene graph so that I can "attach" particle systems to parts of meshes (right now i'm just calling drawGraph and then drawParticles right after in the same transform space (which is also what attaching is, but i don't wanna think about it)).

js game engine

hi hello! im going to talk about what im working on!
today: it is my js game engine
its mostly built around regl
i made a simple shader and wrote a thing to render gltf files
that all currently looks like this:
gltf.png
and now im working on input tracking!

input tracking

i gotta track the mouse so that you can make games, because games are Interactive
in order to click on things in the scene, we gotta take the mouse position and place it into the game scene
then, we shoot out an invisible laser in the direction the camera is facing and see if it hits anything
i started with some code from one of my previous projects, trains game
import { mat4, vec3, vec4 } from 'gl-matrix'

// mouse pixel position -> normalized device coordinates in [-1, 1]
const rect = canvas.getBoundingClientRect()
const clipCoordinates = [
    (2 * (mouseState.x - rect.x)) / rect.width - 1,
    1 - (2 * (mouseState.y - rect.y)) / rect.height,
    -1,
    1]
// undo the projection, then undo the view, to get from clip space back to world space
const inverseView = mat4.invert([], context.view)
const inverseProjection = mat4.invert([], context.projection)
const cameraRayA = vec4.transformMat4([], clipCoordinates, inverseProjection)
// keep it a direction: z pointing into the screen, w = 0 so translation is ignored
const cameraRay = [cameraRayA[0], cameraRayA[1], -1, 0]
const rayWorldA = vec4.transformMat4([], cameraRay, inverseView)
const rayWorld = vec3.normalize([], [rayWorldA[0], rayWorldA[1], rayWorldA[2]])
we're taking the mouse screen position and turning it into normalized device coordinates, or clip coordinates
this mostly just means taking a pixel position like (300, 200) and making it fit into [-1, 1] for x and y (and then putting it into a vec4 for easier math later)
the clip coordinate is the position on the screen, but we need this to be in world space
so we apply the inverse of the projection matrix (the projection squashes the camera's view into clip space, so its inverse takes our clip coordinates back into view space)
and then the inverse of the view matrix (the view matrix transforms the world into the camera's frame, so its inverse carries things from camera space out into world space) so that when we render from the perspective of the camera, the mouse ray is in front of it (instead of in front of the origin)
the result rayWorld is the laser we can shoot into the scene, starting from the camera
I used an existing library ray-aabb to do the actual intersection logic, and I just collide the ray with a big box to represent the ground
import createRay from 'ray-aabb'

const mouseDir = getMouseRay(mouseState, context) // wraps the unprojection code above
const mouseRay = createRay(context.eye, mouseDir)
// the ground is one big axis-aligned box; intersects gives back the hit distance
const collisionDist = mouseRay.intersects([[-20, -1, -20], [20, 0, 20]])
const collision = vec3.add([], vec3.scale([], mouseDir, collisionDist), context.eye)
and now we have a collision position that i can place an object at!
clicking on objects in the scene is a little trickier, because we might have lots of objects and we don't necessarily want to raycast against every triangle on every object (that would be a lot of work)
so next up is to add some kind of spatial partitioning data structure and keep all the objects in there so that we can test against a small set of them

spatial partitioning

in trains game i used an rbush, mostly because it sounded funny to put a bunch of trains in a bush, but it's also a fairly good choice: the trains were all running around in 2d, and the rbush lets you quickly grab everything that's immediately around you in 2d
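the rbush pattern is pleasantly small, something like this sketch (made-up trains, not actual trains game code):
import RBush from 'rbush'

const tree = new RBush()
// rbush stores axis-aligned 2d boxes; extra fields like the train itself just ride along
tree.insert({ minX: 4, minY: 9, maxX: 6, maxY: 11, train: { id: 1 } })
// everything whose box overlaps a 10x10 region around the query point
const nearby = tree.search({ minX: 0, minY: 5, maxX: 10, maxY: 15 })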
for this engine, i want to support 3d raycasts and maybe a simplified collision system for a few primitive shapes (i want to avoid having to use a heavy pre-made physics library like ammo.js)
right now im looking at either rbush-3d, which is an extension of rbush, or implementing a ball tree, for obvious reasons of it being funny (important) and because i think if all the raycasting and collision is based on spheres, the engine will have an interesting, unique gamefeel
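part of the appeal of going all-spheres is that the narrow-phase math is tiny. a sketch (not engine code yet), in the same gl-matrix style as the mouse ray stuff:
import { vec3 } from 'gl-matrix'

// ray vs sphere: distance along the ray to the hit, or null on a miss
// origin and dir are the ray (dir normalized), like rayWorld from the mouse code
function raySphere (origin, dir, center, radius) {
  const toCenter = vec3.subtract([], center, origin)
  const along = vec3.dot(toCenter, dir) // closest approach along the ray
  if (along < 0) return null            // sphere is behind the ray
  const distSq = vec3.squaredLength(toCenter) - along * along
  if (distSq > radius * radius) return null // ray passes outside the sphere
  return along - Math.sqrt(radius * radius - distSq)
}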
it is very important to me to not spend a lot of time on the physics, because that can end up being a very deep hole that no one has really found a perfect solution to. i also want the physics to feel like those ps1 shaders that have the vertices jumping all around because of weird floating point precision