I will use this space to log my everyday work and thoughts.
Author: Harsh
Ray Tracer – Round ‘n’ Round
An interactive ray tracing tutorial/experience!
WIP – Interactivity of the Tutorial Pending
I would love to start every story with “From time immemorial…”, which lets me, as a poignant storyteller, build everything up with impeccable detail and dwell on things with such precise accuracy that it would blah blah blah…
I was already on the edge of implementing a ray tracer, and with all the hype about real-time ray tracing with NVIDIA’s RTX series and Unreal’s ray tracing demo, I was pushed into writing my own little ray tracer.
Here’s my attempt at making a modular ray tracer in JS. I have chosen JS for this particular project because: firstly, it’s easy to showcase; secondly, it gives me the flexibility of remote development. I used an online IDE called CodeAnywhere, which makes it really easy to code from anywhere, literally. I just have to set up my server (which has the code) with SSH and I am good to go. Say I go to a friend’s place and get bored: I log into codeanywhere.com and off I go, coding away! Also, it’s such a breeze to test, given that this is a remote development setup. I just open the website where I am hosting the code. Voilà.
Coming back to what this post is about: ray tracing is a rendering technique where we generate the output image by tracing the paths of rays of light. Of course, we cannot simulate every light ray, so we make some optimisations (read: tradeoffs). One such optimisation: instead of casting rays from the light, where most of the rays might not even hit the camera (which would be wasted computation), we reverse-trace the path the light would have taken to hit a pixel on our viewport. Another level of optimisation (again, read: tradeoff), which I have used in my case but which isn’t usually done if better results are desired, is to cast only a single ray per pixel. We consider the FOV and aperture of the camera to construct a ray for every pixel and trace it.
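As a rough sketch of that per-pixel ray construction (shown in C++ rather than the project’s JS, to match the other code on this site; the `Vec3` helper and the looking-down-negative-z camera convention are my own assumptions, and aperture is ignored):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Build the primary ray direction for pixel (px, py) on a W x H viewport.
// The camera sits at the origin looking down -z; fovDeg is the vertical FOV.
Vec3 primaryRayDir(int px, int py, int W, int H, double fovDeg) {
    double aspect = double(W) / H;
    double scale  = std::tan(fovDeg * 0.5 * std::acos(-1.0) / 180.0);
    // Map pixel centres to [-1, 1], flipping y so +y points up on screen.
    double x = (2.0 * (px + 0.5) / W - 1.0) * aspect * scale;
    double y = (1.0 - 2.0 * (py + 0.5) / H) * scale;
    Vec3 d{x, y, -1.0};
    double len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    return Vec3{d.x / len, d.y / len, d.z / len};
}
```

Looping this over every pixel and tracing each returned direction gives the single-ray-per-pixel scheme described above.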
Tracing is basically just seeing where the traced line hits (intersects) an object; let’s call that point P. Based on the face of the object, a primary color can be picked. To get the illumination, P is checked for its visibility from the light sources. If it is visible, the distance and angle of the light relative to P are taken into account to compute the illumination of the point. For reflections, we calculate the reflected direction using the law of reflection and continue tracing the new reflected line to check for further intersections. If there is an intersection, we repeat the above logic to get the color of that point. The number of levels of recursive reflection can be chosen as desired. Usually, the human eye does not notice reflections beyond three levels, so that can be taken as a standard for our little ray tracer.
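The trace-and-shade loop just described can be sketched for a toy one-sphere scene (again in C++ rather than JS; the scene, the distance falloff constant and the 0.5 reflectance are arbitrary choices of mine, and the shadow-ray check is omitted because a single convex sphere cannot shadow itself on its lit side):

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>

struct Vec3 {
    double x, y, z;
    Vec3 operator+(Vec3 b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3 operator-(Vec3 b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
};
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 norm(Vec3 v) { return v * (1.0 / std::sqrt(dot(v, v))); }

// Hypothetical scene: a unit sphere at the origin, one point light.
const Vec3 kLight{5, 5, 5};

// Ray / unit-sphere intersection; returns the hit distance t, or -1 on a miss.
double hitSphere(Vec3 o, Vec3 d) {
    double b = dot(o, d), c = dot(o, o) - 1.0;
    double disc = b * b - c;
    if (disc < 0) return -1;
    double t = -b - std::sqrt(disc);
    return t > 1e-6 ? t : -1;
}

// Trace a ray and return a grey-scale intensity: a diffuse term from the
// light's angle and distance, plus one mirrored bounce per recursion level.
double trace(Vec3 o, Vec3 d, int depth) {
    double t = hitSphere(o, d);
    if (t < 0) return 0.0;                        // background
    Vec3 P = o + d * t;
    Vec3 N = norm(P);                             // unit-sphere normal at P
    Vec3 L = kLight - P;
    double diffuse = std::max(0.0, dot(N, norm(L))) / (1.0 + 0.01 * dot(L, L));
    double reflected = 0.0;
    if (depth < 3) {                              // three bounces is "enough"
        Vec3 R = d - N * (2.0 * dot(d, N));       // law of reflection
        reflected = 0.5 * trace(P + N * 1e-4, R, depth + 1);
    }
    return std::min(1.0, diffuse + reflected);
}
```

The `depth < 3` test is the recursion cap mentioned above; the tiny `N * 1e-4` offset keeps the bounced ray from immediately re-hitting its own surface.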
This amount of thought should be enough to come up with a toy tracer. One thing worth noting is that we can achieve more materials, like a metallic look, by slightly perturbing the normals at point P with seeded random numbers.
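One possible reading of that normal-perturbation trick (a sketch; the hash constants and jitter scale are arbitrary, and seeding from P itself keeps the noise stable from frame to frame):

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

struct Vec3 { double x, y, z; };

// Roughen a surface normal by mixing in a small seeded random offset.
// Seeding the RNG from the hit point makes the jitter deterministic,
// so the "metal" grain does not crawl between frames.
Vec3 roughNormal(Vec3 n, Vec3 p, double amount) {
    unsigned seed = unsigned(int(p.x * 1009)) * 73856093u
                  ^ unsigned(int(p.y * 1013)) * 19349663u
                  ^ unsigned(int(p.z * 1019)) * 83492791u;
    std::srand(seed);
    auto r = [] { return std::rand() / double(RAND_MAX) - 0.5; };
    Vec3 j{n.x + amount * r(), n.y + amount * r(), n.z + amount * r()};
    double len = std::sqrt(j.x * j.x + j.y * j.y + j.z * j.z);
    return {j.x / len, j.y / len, j.z / len};
}
```

Small `amount` values give a brushed-metal look; large values just make noise.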
Simple Ragdoll Physics – Dance For Me!
Giving life is obviously the best feeling there is. This is probably why mothers want to be mothers! Well, if you can’t afford to be a mother, like me, fret not, computer graphics can take you a long way. (I am not endorsing computer graphics over motherhood. Don’t sue me mothers!)
I am going to give a very brief introduction to building a physics engine that can power a ragdoll… and henceforth, power your emotions! 😉 You can go ahead and make a game… or a dame (if you know what I mean… I mean nothing, you pervert)!
If you’ve ever thought, while playing those physics-based games, that building a physics engine would be so cool, but then thought that the implementation could be daunting, let me assure you that it is a very simple task (ahem… watch out for my oversimplifications. No… just kidding, it’s actually pretty simple). You’ll probably have a harder time setting up the OpenGL environment than building this physics engine. You can find the OpenGL setup part in a lot of places on the internet. In fact, you can even find a few articles about building a custom physics engine as well. But I’ll go ahead and write my article and try to do a better job of introducing physics-based graphics with OpenGL.
Let me start with the (not so) boring theory. You should know a little of Newtonian Physics to get a context of what this is about. You must be extremely uneducated to not know about Newton’s Laws. You shouldn’t even be here if that’s the case. Moving on, these equations should be familiar:
s_{n} = s_{p} + v_{p}t + (at^{2})/2
v_{n} = v_{p} + at
We can of course use these equations to power our engine. Or, we can use something better: Verlet integration. Verlet integration uses an approximated version of the Newtonian equations to find the updated positions of particles which are subject to our forces:
x_{n} = 2x_{c} – x_{p} + a Δt^{2}
n, c and p correspond to new, current and previous respectively.
Verlet integration is very popular. One reason is that this equation saves us from the growing t^{2} term in the former set of equations, which can introduce increasing error over time. This equation also turns out to be better for the actual implementation, because x_{c} and x_{p} can simply be swapped each step. Whereas, in the former case, new positions and velocities must be calculated separately, and this separate calculation of velocity can introduce instability into the system.
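In code, one Verlet step under these conventions might look like this (a sketch; the array names, the gravity value and the fixed timestep are assumptions):

```cpp
#include <cassert>
#include <vector>

struct P { double x, y; };

std::vector<P> cur, prev;        // current and previous particle positions
double ax = 0.0, ay = -10.0;     // net acceleration; here, just gravity
const double dt = 1.0 / 60.0;    // fixed timestep

// One Verlet step: x_new = 2*x_cur - x_prev + a * dt^2.
// Velocity lives implicitly in (cur - prev), so "moving" a particle is
// just writing its position; there is no velocity array to keep in sync.
void verletStep() {
    for (std::size_t i = 0; i < cur.size(); ++i) {
        P next{2 * cur[i].x - prev[i].x + ax * dt * dt,
               2 * cur[i].y - prev[i].y + ay * dt * dt};
        prev[i] = cur[i];        // the "swap" mentioned above
        cur[i]  = next;
    }
}
```

Note how the previous position is simply overwritten with the current one each step, which is exactly the swap that makes this scheme so convenient.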
Here are the things that you need to be ready with:

 A simple OpenGL (of course you can use any graphics library) program that plots points in 2D
 Zeal
This is just a proof of concept, so we’ll stick to 2D. Note that extending this to 3D is a cakewalk. Let’s build that physics engine in tiny steps.
The Gravity
I’m assuming you have an array, vector or an equivalent to track the fucking points. Now let’s implement a part of the fucking equation to drive the fucking points down to earth. The fucking points need to learn the fucking lesson, don’t they? To do this, of course, we can implement only the acceleration part of the fucking equation:
x_{n} += a Δt^{2}
Where “a” can be –10 or whatever the fuck you want. We’d see something like this:
Of course, we can have any fucking number of force vectors, which can then be added to find the effective fucking acceleration of the fucking particles.
All this swearing must be enough to apprise you of the fucking gravity of the situation!
A Trajectory
With the remaining part of the equation, we can give the particle an initial velocity. This allows us to model a trajectory with the initial velocity vector as the control parameter. Something of this sort:
x_{n} = 2x_{c} – x_{p} + a Δt^{2}
If x_{p} was (0, 0), x_{c} could’ve been (1, 1) to get an initial angle of 45º. You get the idea.
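In code, “launching” a particle is just a matter of placing x_{p} one step behind x_{c} along the desired velocity (a hypothetical helper; it returns the previous position to store):

```cpp
#include <cassert>

struct P { double x, y; };

// With Verlet, velocity is implicit: v ~ (cur - prev) / dt. To launch a
// particle at 45 degrees, keep x_c at the start point and return an x_p
// offset one step "behind" along the launch direction.
P launch45(P start, double speed, double dt) {
    double c = 0.70710678118654752;        // cos 45 = sin 45
    return P{start.x - speed * c * dt,
             start.y - speed * c * dt};
}
```

Any other launch angle works the same way: offset the previous position opposite to the velocity you want.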
Constraints
What we saw until now was finding the next step of a particle in a seemingly infinite universe without any limitations. They were just a set of points influenced by the forces we modelled, as if nothing else affected their movement. But that is not really the case in real life. There are a lot of interactive forces acting between an infinite number of points. So, to model something like this, let’s put constraints into our existing engine. Of course, we’ll limit ourselves to a minimal set of points that are enough to make our animation look convincing. The flow of our program would be like this:
```cpp
void generateFrame() {
    verletStep();
    satisfyConstraints();
}
```
Yet again, let’s build the constraint system in small steps:
The Bounding Box
This is the simplest constraint to implement. Just check if the particle has moved beyond the box and limit it to the box.
```cpp
cVertices[i].x = (cVertices[i].x < 50)  ? 50  : cVertices[i].x;
cVertices[i].x = (cVertices[i].x > 450) ? 450 : cVertices[i].x;
cVertices[i].y = (cVertices[i].y < 50)  ? 50  : cVertices[i].y;
cVertices[i].y = (cVertices[i].y > 450) ? 450 : cVertices[i].y;
```
The Distance Constraint
This is where we’ll see things getting interesting. Adding this constraint is a huge step towards modelling our ragdoll. As the name says, this step involves fixing the distance between any two points. For the sake of convenience, let’s say the points are A and B. We put a constraint that the distance between these two points must be, say, d. If the new distance d’ is different from d, then the delta, d’ − d, is found. Each point is pushed delta/2 away from or towards the other, based on the sign of delta. The direction of this push/pull is parallel to AB.
```cpp
// Original vector AB (rest configuration)
Vertex r = Vertex::delta(vertices[a], vertices[b]);
// Vector A'B'; A' and B' are the current positions of A and B
Vertex d = Vertex::delta(cVertices[a], cVertices[b]);
// Finding the scalar amount to push/pull by (square-root-free approximation)
double dp = Vertex::dot(d, d);
double rl = Vertex::dot(r, r);
double sc = rl / (dp + rl) - 0.5;
// Associate that scalar with vector 'd'
d.scale(sc);
// Push the current positions, thus satisfying the distance constraint
cVertices[a].minus(d);
cVertices[b].plus(d);
```
Do keep in mind that this constraint is added on top of the existing bounding-box constraint. We see something like this:
Grand Finale
Believe it or not (no, just believe it), what we’ve built until now is enough to model a stick figure. So let’s build it and fool around with it!
Go ahead and make a simple stick figure with dimensions that please you. Put in the distance constraints to keep it from collapsing into a singleton, and save yourself the embarrassment of being called a simpleton. Also, keep in mind that you’ll need a few distance constraints where you don’t really wanna draw a stick. You’ll hopefully understand it better from the GIF below.
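One possible minimal figure, including the invisible bracing constraints just mentioned (a sketch; the coordinates, topology and the `drawn` flag are all arbitrary choices of mine):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Pt    { double x, y; };
struct Stick { int a, b; double rest; bool drawn; };  // drawn=false -> brace only

std::vector<Pt> pts;
std::vector<Stick> sticks;

double dist(Pt p, Pt q) {
    return std::sqrt((p.x - q.x) * (p.x - q.x) + (p.y - q.y) * (p.y - q.y));
}

// Connect two existing points with a distance constraint fixed at their
// current separation; 'drawn' marks whether the stick is rendered.
void link(int a, int b, bool drawn) {
    sticks.push_back({a, b, dist(pts[a], pts[b]), drawn});
}

// A minimal figure: head, shoulders (collapsed to one point), hips,
// hands and feet, plus two undrawn braces so it cannot fold flat.
void buildFigure() {
    sticks.clear();
    pts = {{250, 400},               // 0 head
           {250, 360},               // 1 shoulders
           {250, 300},               // 2 hips
           {220, 330}, {280, 330},   // 3, 4 hands
           {230, 250}, {270, 250}};  // 5, 6 feet
    link(0, 1, true);  link(1, 2, true);   // spine
    link(1, 3, true);  link(1, 4, true);   // arms
    link(2, 5, true);  link(2, 6, true);   // legs
    link(0, 2, false);                     // invisible braces: keep the spine
    link(5, 6, false);                     // and the legs from collapsing
}
```

When rendering, draw only the sticks with `drawn == true`; the braces still participate in `satisfyConstraints` and keep the figure rigid.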
Appendix
You can of course explore other kinds of constraints, or other structures of constraints. There’s a lot of exploratory opportunity (…not that kind!), and there are a lot of things that can be implemented just with the knowledge we’ve gained here.
For instance, to implement dragging of the model, we can add a new constraint on some point. The constraint here would be that the point’s position equals the mouse coordinates. The other constraints ensure that the model moves along with that one point we’re moving.
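That drag constraint can be as small as this (names are hypothetical; wire `mouseX`/`mouseY` to your windowing library’s cursor events and run this alongside the other constraints each frame):

```cpp
#include <cassert>

struct Pt { double x, y; };

int grabbed = -1;                    // index of the grabbed vertex; -1 = none

// Hard constraint: the grabbed vertex's position equals the mouse position.
// The distance and bounding-box constraints then drag the rest of the
// figure along with it.
void satisfyDragConstraint(Pt* cVertices, double mouseX, double mouseY) {
    if (grabbed < 0) return;
    cVertices[grabbed].x = mouseX;
    cVertices[grabbed].y = mouseY;
}
```

Because Verlet keeps velocity implicit, simply teleporting the vertex like this is stable; the next step reads the new position difference as velocity.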
And this can also do a manageable job at simulating cloth. Just set the vertices and constraints accordingly to generate a cloth and sim away!
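For cloth, the constraint list is just grid neighbours (a sketch with structural links only; shear and bend constraints, which stiffer cloth would want, are left out):

```cpp
#include <cassert>
#include <vector>

struct Pair { int a, b; };

// Build the constraint list for a w x h cloth grid: each vertex links to
// its right and bottom neighbours. Vertex (i, j) has index j*w + i.
std::vector<Pair> clothConstraints(int w, int h) {
    std::vector<Pair> c;
    for (int j = 0; j < h; ++j)
        for (int i = 0; i < w; ++i) {
            int v = j * w + i;
            if (i + 1 < w) c.push_back({v, v + 1});  // horizontal stick
            if (j + 1 < h) c.push_back({v, v + w});  // vertical stick
        }
    return c;
}
```

Pin the top row of vertices (or drag one with the mouse constraint above) and the same engine waves the sheet around.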
This writeup and my implementation are inspired by the classic paper Advanced Character Physics by Thomas Jakobsen.
Pat yourself on the back, because you can be proud that you’ve just learnt something that was used in one of the best-known game franchises around! Yes, this technique made its gaming-industry debut in the 2000 game Hitman: Codename 47.
And now, time for a small pep talk… What we learnt now is just a tool. Remember, all the knowledge in the world is just that: a tool. Greatness lies with the one who can make the best of it. I know, we’re all tired of people quoting random stuff and attributing it to Einstein, but trust me on this one, I’ve got this from a reliable source: “Imagination is more important than knowledge. For knowledge is limited to all we now know and understand, while imagination embraces the entire world, and all there ever will be to know and understand.”
Believe yourself to be an artist and you are one. And do not hesitate to take up unconventionalities.
“I am an artist you know… it is my right to be odd.”
― E A Bucchianeri, Brushstrokes of a Gadfly