photo_resistor – A Mono-Pixel Camera

Ever wanted to take a photo from a single photoresistor? Of course, you have; who hasn’t? (/s)

With a million dreams gleaming in my eyes and 0.000001 million pixels mounted on my servos, I present to you, the photo_resistor!


TL;DR: this is a photoresistor stuck on a pan-tilt servo mount, controlled by an Arduino Nano, with the readings transferred over an HC-05 Bluetooth module.

If you’re running wild with your imagination about how the output of this dingus of a device would look, let me help you get back to the ground: imagine a camera sensor which is the king of rolling shutter, with a minimum “shutter speed” of about 15 s, an anorexically low ISO (sensor sensitivity), a dead fish’s eye distortion and an inbuilt blur filter (take that, Instagram camera!). Well, this is basically it. If you stop squinting, you should see a clear picture of the tube-light being captured with the beloved photo_resistor below. I said stop squinting your eyes!

While I had been playing around with the idea for some time, some free time on my hands finally let me materialise this contraption. Let’s break it down into five clearly defined modules. The microcontroller had three responsibilities: reading the light-intensity value, controlling the servos, and transferring the readings via Bluetooth. The PC had two: reading the data over Bluetooth, and rendering the scanned image. For photo_resistor to work there’s one last thing that needs to be taken care of: the way the light falls onto the photoresistor.

By default, the light rays fall from all directions onto the photoresistor. This will end up polluting our readings and make the final captured image a blurry mess.

So, we encase the photoresistor in a cylindrical pipe with non-reflective interior walls. The pipe acts like a reverse-laser, making sure that the rays falling straight onto the photoresistor at ~90° hit it directly, while the other rays reflect off the almost-non-reflective walls multiple times and die down in amplitude by the time they reach the photoresistor. The actual pipe I used was a black sheet of paper rolled into a cylinder.

I got down to building each module individually and started putting them together like a car getting assembled on an assembly line. Everything fit together smoothly: the servos would move on command, the HC-05 would send the data as requested, and the Python script would receive the data just fine and render it. Perfect! I couldn’t have asked for more.
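For a flavour of the PC side, here is a minimal Python sketch of the receive-and-render half. The `pan,tilt,intensity` line format and the grid sizes are my own made-up illustration, not the actual wire protocol; the real HC-05 link just shows up as a serial port whose lines you would feed into something like this:

```python
# A sketch of the PC side, with a made-up wire format: the Arduino is
# assumed to send one "pan,tilt,intensity" line per reading over the
# HC-05 serial link. Grid sizes are illustrative only.
PAN_STEPS, TILT_STEPS = 4, 3

def parse_frame(lines):
    """Turn captured serial lines into a tilt x pan grid of intensities."""
    image = [[0] * PAN_STEPS for _ in range(TILT_STEPS)]
    for line in lines:
        pan, tilt, value = (int(v) for v in line.strip().split(","))
        image[tilt][pan] = value
    return image

# Three readings as they might arrive off the serial port:
image = parse_frame(["0,0,512", "1,0,600", "3,2,1023"])
```

In a real run you would read the lines with something like pySerial and hand the finished grid to an image library; the parsing step stays the same.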

So, conceptually, if I turn the ignition key of my contraption, it should just work, right? Wrong. Reality is a bitch, innit? While all of my LEGO-like modules were fitting into their LEGO-like grooves, it dawned on me that hobbyist servos with hobbyist photoresistors present us with first-rate challenges! I realised that the photoresistor readings had considerable noise and laughable sensitivity. While this did not really come as a surprise, what did catch me off-guard was that the servo had an offset of up to 10° between when the armature was moving up vs. when it was moving down. This resulted in images that looked like old movies suffering from the interlacing problem.

photo_resistor‘s rendition of A Starry Night
How the shitty servos screwed up my photo

I spent over a day figuring out what had gone wrong before finally concluding that this is an act of god over which I have no jurisdiction. So I turned to the golden answer any Indian resorts to in times of dire necessity: Jugaad. I just added an offset in my code to compensate for my servos’ atrocity. All in all, it worked out just fine, as you can see in the second video above, and I lived happily ever after.
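The jugaad itself is one line of arithmetic. Here it is sketched in Python for illustration (the real fix lives in the Arduino sketch, and the 10° figure is just what my particular servo was off by):

```python
# Compensate the commanded angle by a direction-dependent offset.
# SERVO_OFFSET_DEG is whatever your own servo misbehaves by.
SERVO_OFFSET_DEG = 10

def corrected_angle(target_deg, sweeping_up):
    # The armature settled ~10 degrees off on the downward sweep only,
    # so nudge the command the other way for that direction.
    return target_deg if sweeping_up else target_deg + SERVO_OFFSET_DEG
```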

You can find all of my code here.

Dance For Me! – Simple Ragdoll Physics

Giving life is obviously the best feeling there is. This is probably why mothers want to be mothers! Well, if you can’t afford to be a mother, like me, fret not, computer graphics can take you a long way. (I am not endorsing computer graphics over motherhood. Don’t sue me mothers!)

I am going to give a very brief introduction to building a physics engine that can power a ragdoll… and thereby, power your emotions! 😉 You can go ahead and make a game… or a dame (if you know what I mean… I mean nothing, you pervert)!

If you’ve ever thought, while playing those physics-based games, that building a physics engine would be so cool, but also that the implementation could be daunting, let me assure you that it is a very simple task (ahem… watch out for my over-simplifications. No… just kidding, it’s actually pretty simple). You’ll probably have a harder time setting up the OpenGL environment than building this physics engine. You can find the OpenGL setup covered in a lot of places on the internet. In fact, you can even find a few articles about building a custom physics engine as well. But I’ll go ahead and write my article anyway and try to do a better job at introducing physics-based graphics with OpenGL.

Let me start with the (not so) boring theory. You should know a little of Newtonian Physics to get a context of what this is about. You must be extremely uneducated to not know about Newton’s Laws. You shouldn’t even be here if that’s the case. Moving on, these equations should be familiar:

s_n = s_p + v_p·t + (a·t²)/2

v_n = v_p + a·t

We can of course use these equations to power our engine. Or, we can use something better: Verlet integration. Verlet integration uses an approximated version of the Newtonian equations to find the updated positions of particles subject to our forces:

x_n = 2·x_c − x_p + a·Δt²

n, c and p correspond to new, current and previous respectively.

Verlet integration is very popular. This could be because the equation saves us from the ever-growing t² term of the former set of equations, which can blow up accumulated errors as time increases. The equation also turns out to be better for implementation because x_c and x_p can simply be swapped, whereas in the former case new positions and velocities must be calculated separately. That separate calculation of velocity can also introduce instability into the system.
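As a concrete sketch (in Python, with names of my own choosing), one Verlet step and the position swap look like this:

```python
# One Verlet step with the position-swap trick: the old "current"
# simply becomes the new "previous", so no velocity is stored at all.
def verlet_step(curr, prev, acc, dt):
    new = [2 * c - p + a * dt * dt for c, p, a in zip(curr, prev, acc)]
    return new, curr  # (new current, new previous)

# A particle released at rest under gravity (a = -10 on the y axis):
curr, prev = [0.0, 100.0], [0.0, 100.0]
for _ in range(3):
    curr, prev = verlet_step(curr, prev, [0.0, -10.0], 0.1)
```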

Here are the things that you need to be ready with:

• A simple OpenGL (of course you can use any graphics library) program that plots points in 2D
• Zeal

This is just a proof of concept, so we’ll make do with 2D. Note that extending this to 3D is easier than a cakewalk. Let’s build that physics engine in tiny steps.

The Gravity

I’m assuming you have an array, vector or an equivalent to track the fucking points. Now let’s implement a part of the fucking equation to drive the fucking points down to earth. The fucking points need to learn the fucking lesson, don’t they? To do this, of course, we can implement only the acceleration part of the fucking equation:

x_n += a·Δt²

Where “a” can be –10 or whatever the fuck you want. We’d see something like this:


Of course, we can have any fucking number of force vectors, which can then be added to find the effective fucking acceleration of the fucking particles.
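Putting that together, a minimal sketch (all names and numbers here are made up): sum the force vectors into one effective acceleration and let only that term inject motion into points that start at rest:

```python
# Sum force vectors into one effective acceleration, then step points
# that start at rest (prev == curr) so only the acceleration term
# drives them downward (and sideways).
DT = 0.1
FORCES = [(0.0, -10.0), (2.0, 0.0)]   # gravity plus an imaginary sideways gust
ACCEL = (sum(f[0] for f in FORCES), sum(f[1] for f in FORCES))

points = [[50.0, 80.0]]                        # current positions
prev_points = [[p[0], p[1]] for p in points]   # previous positions (at rest)

def step():
    for curr, prev in zip(points, prev_points):
        for axis in range(2):
            new = 2 * curr[axis] - prev[axis] + ACCEL[axis] * DT * DT
            prev[axis] = curr[axis]
            curr[axis] = new

for _ in range(10):
    step()
```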


All this swearing must be enough to apprise you of the fucking gravity of the situation!

A Trajectory

With the remaining part of the equation, we can give the particle an initial velocity. This will allow us to model a trajectory with the initial velocity vector as the control parameter. Something of this sort:


x_n = 2·x_c − x_p + a·Δt²

If x_p was (0, 0), x_c could’ve been (1, 1) to get an initial angle of 45°. You get the idea.
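In code, choosing x_p is all there is to launching a particle, because Verlet’s velocity is implicit in (x_c − x_p)/Δt. A small sketch, with arbitrary launch numbers of my own:

```python
import math

# Launching a particle by offsetting its previous position:
# the implicit velocity is (x_c - x_p) / dt.
DT, GRAVITY = 0.05, -10.0
speed, angle = 20.0, math.radians(45)   # arbitrary demo values

curr = [0.0, 0.0]
prev = [curr[0] - speed * math.cos(angle) * DT,   # previous position encodes
        curr[1] - speed * math.sin(angle) * DT]   # the initial velocity

trajectory = []
while curr[1] >= 0.0:                 # integrate until it lands
    trajectory.append(tuple(curr))
    new = [2 * curr[0] - prev[0],
           2 * curr[1] - prev[1] + GRAVITY * DT * DT]
    prev, curr = curr, new

peak = max(y for _, y in trajectory)  # analytically ~10 for these numbers
```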


What we saw until now was finding the next step of a particle in a seemingly infinite universe without any limitations. They were just a set of points influenced only by the forces we modelled, as if nothing else affected their movement. But this is not really the case in real life: there are a lot of interactive forces acting between an enormous number of points. So, to model something like this, let’s add constraints to our existing engine. Of course, we’ll limit ourselves to a minimal set of points, just enough to make our animation look convincing. The flow of our program would be like this:

Yet again, let’s build the constraint system in small steps:

The Bounding Box

This is the simplest constraint to implement. Just check if the particle has moved beyond the box and limit it to the box.
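A sketch of the clamp, with a box of my own invention:

```python
# Clamp a particle back inside a made-up 100x100 box after each step.
BOX = (0.0, 0.0, 100.0, 100.0)   # min_x, min_y, max_x, max_y

def constrain_to_box(point):
    point[0] = min(max(point[0], BOX[0]), BOX[2])
    point[1] = min(max(point[1], BOX[1]), BOX[3])

p = [120.0, -5.0]    # escaped through the right wall and the floor
constrain_to_box(p)  # p is now [100.0, 0.0]
```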


The Distance Constraint

This is where we’ll see things getting interesting. Adding this constraint is a huge step towards modelling our ragdoll. As the name says, this step involves fixing the distance between any two points. For the sake of convenience, let’s say the points are A and B. We put a constraint that the distance between these two points must be, say, d. If the new distance d′ is different from d, then the delta, d′ − d, is found. Each point is pushed delta/2 closer to or away from the other based on the sign of the delta. The direction of this push/pull is parallel to A–B.
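The push/pull described above fits in a few lines. A sketch in Python (function name is my own):

```python
import math

# Satisfy one distance constraint: move A and B half the error each,
# along the line joining them, until |A - B| equals d again.
def satisfy_distance(a, b, d):
    dx, dy = b[0] - a[0], b[1] - a[1]
    current = math.hypot(dx, dy)
    if current == 0.0:
        return                            # coincident points: direction undefined
    shift = (current - d) / 2.0 / current  # signed: pull in or push out
    a[0] += dx * shift; a[1] += dy * shift
    b[0] -= dx * shift; b[1] -= dy * shift

A, B = [0.0, 0.0], [4.0, 0.0]
satisfy_distance(A, B, 2.0)   # A -> [1.0, 0.0], B -> [3.0, 0.0]
```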

Do keep in mind that this constraint is added on top of the existing one, the Bounding Box constraint. We see something like this:

Grand Finale

Believe it or not (no, just believe it), what we’ve built until now is enough to model a stick figure. So let’s build it and fool around with it!

Go ahead and make a simple stick figure with dimensions that please you. Put in the distance constraints to keep it from collapsing into a singleton, and save yourself the embarrassment of being called a simpleton. Also keep in mind that you’ll need a few distance constraints where you don’t really want to draw a stick. You’ll hopefully understand this better from the GIF below.
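To make that concrete, here is a hypothetical stick figure: every coordinate and pairing below is an arbitrary choice of mine. The “sticks” get drawn, while the “braces” are the invisible distance constraints that keep the limbs from folding over:

```python
import math

# A tiny stick figure: head, hip, two hands, two feet.
points = {
    "head": [0.0, 4.0], "hip": [0.0, 2.0],
    "lhand": [-1.0, 3.0], "rhand": [1.0, 3.0],
    "lfoot": [-1.0, 0.0], "rfoot": [1.0, 0.0],
}
sticks = [("head", "hip"), ("head", "lhand"), ("head", "rhand"),
          ("hip", "lfoot"), ("hip", "rfoot")]
braces = [("lhand", "rhand"), ("lfoot", "rfoot")]   # never drawn

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Rest lengths are whatever the figure was born with.
rest = {pair: dist(points[pair[0]], points[pair[1]]) for pair in sticks + braces}

def relax(iterations=10):
    for _ in range(iterations):
        for (i, j), d in rest.items():
            a, b = points[i], points[j]
            dx, dy = b[0] - a[0], b[1] - a[1]
            cur = math.hypot(dx, dy) or 1e-9
            shift = (cur - d) / 2.0 / cur
            a[0] += dx * shift; a[1] += dy * shift
            b[0] -= dx * shift; b[1] -= dy * shift

points["head"][1] = 6.0   # yank the head upward...
relax()                   # ...and the constraints drag the body along
```

Repeatedly relaxing all the constraints is exactly the loop Jakobsen describes; a handful of iterations per frame is usually enough to look rigid.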



You can of course explore other kinds of constraints, or other structures of constraints. There’s a lot of exploratory opportunity (…not that kind!), and there are a lot of things that can be implemented with just the knowledge we’ve gained here.

For instance, to implement dragging of the model, we can add a new constraint on some point. The constraint here is that the point’s position must equal the mouse co-ordinate. The other constraints ensure that the model moves along with that one point we’re moving.
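Such a “pin” constraint is the simplest of them all. A sketch (mouse_pos stands in for whatever your windowing library reports):

```python
# Dragging as a constraint: pin one particle to the mouse each frame.
def pin_to_mouse(points, index, mouse_pos):
    points[index][0] = mouse_pos[0]
    points[index][1] = mouse_pos[1]

pts = [[0.0, 0.0], [1.0, 0.0]]
pin_to_mouse(pts, 0, (5.0, 5.0))
# the distance-constraint pass then drags the rest of the model along
```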

And this can also do a manageable job of simulating cloth. Just set the vertices and constraints up accordingly to generate a cloth and sim away!
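Generating the cloth amounts to laying out a grid of particles and joining the horizontal and vertical edges with distance constraints. A sketch with arbitrary sizes:

```python
# Cloth setup: a COLS x ROWS grid of particles with a distance
# constraint along every horizontal and vertical edge.
COLS, ROWS, SPACING = 5, 4, 10.0

points = [[c * SPACING, r * SPACING] for r in range(ROWS) for c in range(COLS)]

constraints = []   # (index_a, index_b, rest_length)
for r in range(ROWS):
    for c in range(COLS):
        i = r * COLS + c
        if c + 1 < COLS:
            constraints.append((i, i + 1, SPACING))      # horizontal edge
        if r + 1 < ROWS:
            constraints.append((i, i + COLS, SPACING))   # vertical edge
```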


This write-up and my implementation are inspired by the classic paper Advanced Character Physics by Thomas Jakobsen.

Pat yourself on the back, because you can be proud that you’ve just learnt something that was used in one of the best game franchises around! Yes, this was first used in the gaming industry in Hitman: Codename 47 (2000).

And now, time for a small pep talk… What we learnt now is just a tool. Remember, all the knowledge in the world is just that: a tool. The greatness lies with the one who can make the best of it. I know, we’re all tired of people quoting random stuff and attributing it to Einstein, but trust me on this one, I’ve got this from a reliable source: “Imagination is more important than knowledge. For knowledge is limited to all we now know and understand, while imagination embraces the entire world, and all there ever will be to know and understand.”

Believe yourself to be an artist and you are one. And do not hesitate to take up unconventionalities.

I am an artist you know… it is my right to be odd.
E A Bucchianeri, Brushstrokes of a Gadfly