photo_resistor – A Mono-Pixel Camera

Ever wanted to take a photo with a single photoresistor? Of course you have; who hasn’t? (/s)

With a million dreams gleaming in my eyes and 0.000001 million pixels mounted on my servos, I present to you, the photo_resistor!


TL;DR: this is a photoresistor stuck on a pan-tilt servo mount, controlled by an Arduino Nano, with the readings transferred to a PC via an HC-05 Bluetooth module.

If you’re running wild with your imagination about what the output of this dingus of a device would look like, let me help you get back to the ground: imagine a camera sensor that is the king of rolling shutter, with a minimum “shutter speed” of about 15 s, an anorexically low ISO (sensor sensitivity), a dead fish’s eye distortion and an inbuilt blur filter (take that, Instagram camera!). Well, this is basically it. If you stop squinting, you should see a clear picture of the tube-light being captured using the beloved photo_resistor below. I said stop squinting your eyes!

I had been playing around with the idea for some time, and with some free time on my hands, I finally got around to materialising this contraption. Let’s break this down into simpler modules; it divides into five clearly defined responsibilities. The microcontroller handles three: reading the light-intensity value, controlling the servos, and transferring the readings via Bluetooth. The PC handles two: reading the data over Bluetooth and rendering the scanned image. For photo_resistor to work, there’s one last thing that needs to be taken care of: the way the light falls onto the photoresistor.

By default, light rays fall onto the photoresistor from all directions. This would pollute our readings and make the final captured image a blurry mess.

So, we encase the photoresistor in a cylindrical pipe with non-reflective interior walls. The pipe acts like a reverse laser: rays falling straight onto the photoresistor at ~90° hit it directly, while the other rays reflect off the almost-non-reflective walls multiple times and die down in amplitude by the time they reach the photoresistor. The actual pipe I used was a black sheet of paper rolled into a cylinder.

I built each module individually and started putting them together like a car on an assembly line. Everything fit smoothly: the servos would move on command, the HC-05 would send the data as requested, and the Python script would receive and render the data just fine. Perfect! I couldn’t have asked for more.

So, conceptually, if I turn the ignition key of my contraption, it should just work, right? Wrong. Reality is a bitch, innit? While all of my LEGO-like modules were snapping together like, well, LEGO, it dawned on me that hobbyist servos with hobbyist photoresistors present first-rate challenges! I realised that the photoresistor readings had considerable noise and laughable sensitivity. That did not really come as a surprise; what did catch me off-guard was that the servo had an offset of up to 10° between when the armature was moving up and when it was moving down. This resulted in images that looked like old movies suffering from the interlacing problem.

photo_resistor’s rendition of A Starry Night
How the shitty servos screwed up my photo

I spent over a day figuring out what had gone wrong before finally concluding that this was an act of god over which I have no jurisdiction. So I turned to the golden answer any Indian resorts to in times of dire necessity: Jugaad. I just added an offset in my code to handle my servos’ atrocity. All in all, it worked out just fine, as you can see in the second video above, and I lived happily ever after.
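For the curious, the gist of that Jugaad in code (a Python sketch with made-up names and a made-up offset; measure your own servo’s hysteresis, and which sweep direction needs the correction depends on the servo):

```python
UP_OFFSET_DEG = 10  # hypothetical: the lag I measured was "up to 10 degrees"

def command_angle(target_deg, sweeping_up):
    """Compensate the direction-dependent servo lag before commanding it."""
    return target_deg + (UP_OFFSET_DEG if sweeping_up else 0)
```

The actual firmware runs on the Arduino, so think of this as the logic, not the code.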

You can find all of my code here.

Dance For Me! – Simple Ragdoll Physics

Giving life is obviously the best feeling there is. This is probably why mothers want to be mothers! Well, if you can’t afford to be a mother, like me, fret not, computer graphics can take you a long way. (I am not endorsing computer graphics over motherhood. Don’t sue me mothers!)

I am going to give a very brief introduction to building a physics engine that can power a ragdoll.. and henceforth, power your emotions! 😉 You can go ahead and make a game… or a dame (if you know what I mean… I mean nothing, you pervert)!

If you’ve ever thought, while playing those physics-based games, that building a physics engine would be so cool, but then figured the implementation would be daunting, let me assure you that it is a very simple task (ahem… watch out for my over-simplifications. No… just kidding, it’s actually pretty simple). You’ll probably have a harder time setting up the OpenGL environment than building this physics engine. You can find guides for setting up OpenGL in a lot of places on the internet. In fact, you can even find a few articles about building a custom physics engine as well. But I’ll go ahead, write my own article, and try to do a better job at introducing physics-based graphics with OpenGL.

Let me start with the (not so) boring theory. You should know a little of Newtonian Physics to get a context of what this is about. You must be extremely uneducated to not know about Newton’s Laws. You shouldn’t even be here if that’s the case. Moving on, these equations should be familiar:

s_n = s_p + v_p·t + a·t²/2

v_n = v_p + a·t

We can of course use these equations to power our engine. Or, we can use something better: Verlet integration! Verlet integration uses an approximated version of the Newtonian equations to find the updated positions of particles that are subject to our forces:

x_n = 2x_c − x_p + a·Δt²

n, c and p correspond to new, current and previous respectively.

Verlet integration is very popular. One reason is that this equation saves us from the growing t² term in the former set of equations, which would potentially introduce errors as time increases. This equation also turns out to be better for the general implementation, because x_c and x_p can simply be swapped each step, whereas in the former case, new positions and velocities must be calculated separately. That separate calculation of velocity can also introduce instability into the system.
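As a concrete sketch (1-D, with names of my own choosing), the update and the swap look like this:

```python
def verlet_step(x_curr, x_prev, accel, dt):
    # x_new = 2*x_curr - x_prev + a*dt^2
    return 2.0 * x_curr - x_prev + accel * dt * dt

# The swap: the current position becomes the previous one,
# and the freshly computed position becomes the current one.
x_prev, x_curr = 0.0, 0.0
for _ in range(3):
    x_prev, x_curr = x_curr, verlet_step(x_curr, x_prev, -10.0, 0.1)
```

Notice there is no velocity variable anywhere; it lives implicitly in the gap between x_curr and x_prev.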

Here are the things that you need to be ready with:

• A simple OpenGL (of course, you can use any graphics library) program that plots points in 2D
• Zeal

This is just a proof of concept, so we’ll stick to 2D. Note that extending this to 3D is easier than a cakewalk. Let’s build that physics engine in tiny steps.

The Gravity

I’m assuming you have an array, vector or an equivalent to track the fucking points. Now let’s implement a part of the fucking equation to drive the fucking points down to earth. The fucking points need to learn the fucking lesson, don’t they? To do this, of course, we can implement only the acceleration part of the fucking equation:

x += a·Δt²

Where “a” can be –10 or whatever the fuck you want. We’d see something like this:


Of course, we can have any fucking number of force vectors, which can then be added to find the effective fucking acceleration of the fucking particles.
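A tiny sketch of all that, profanity filtered out (unit mass and 2-D points assumed; the numbers are arbitrary):

```python
DT = 0.1
forces = [(0.0, -10.0), (2.0, 0.0)]  # gravity plus a made-up sideways gust

# Effective acceleration = sum of all the force vectors.
ax = sum(f[0] for f in forces)
ay = sum(f[1] for f in forces)

points = [[0.0, 5.0], [3.0, 4.0]]
for p in points:          # acceleration-only update: p += a * dt^2
    p[0] += ax * DT * DT
    p[1] += ay * DT * DT
```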


All this swearing must be enough to apprise you of the fucking gravity of the situation!

A Trajectory

With the remaining part of the equation, we can give the particle an initial velocity. This lets us model a trajectory with the initial velocity vector as the control parameter. Something of this sort:


x_n = 2x_c − x_p + a·Δt²

If x_p was (0, 0), x_c could be (1, 1) to get an initial angle of 45°. You get the idea.
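In code, that seeding looks like this (2-D, names mine):

```python
def verlet_step_2d(curr, prev, accel, dt):
    """Per-coordinate Verlet update: x_n = 2*x_c - x_p + a*dt^2."""
    return tuple(2.0 * c - p + a * dt * dt
                 for c, p, a in zip(curr, prev, accel))

# Seeding x_p = (0, 0) and x_c = (1, 1) encodes a 45-degree launch:
prev, curr = (0.0, 0.0), (1.0, 1.0)
curr = verlet_step_2d(curr, prev, (0.0, -10.0), 0.1)
```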


What we saw until now was finding the next step of a particle in a seemingly infinite universe without any limitations. They were just a set of points influenced by the forces we modelled, as if nothing else influenced their movement. But this is not really the case in real life. There are a lot of interactive forces acting between an infinite number of points. So, to model something like this, let’s add constraints to our existing engine. Of course, we’ll limit ourselves to a minimalistic set of points, just enough to make our animation look convincing. The flow of our program would be like this:
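Roughly, the per-frame flow is: integrate every particle with Verlet, then relax the constraints a handful of times so they settle together. A sketch (1-D for brevity, all names my own):

```python
def frame(curr, prev, accel, dt, constraints, iterations=5):
    """One frame: Verlet-integrate every particle, then relax constraints."""
    for i in range(len(curr)):
        curr[i], prev[i] = 2.0 * curr[i] - prev[i] + accel[i] * dt * dt, curr[i]
    for _ in range(iterations):
        for constraint in constraints:
            constraint(curr)
    return curr, prev

def floor_at(limit):
    """A toy bounding constraint: don't let point 0 sink below `limit`."""
    def constraint(points):
        points[0] = max(points[0], limit)
    return constraint

curr, prev = [0.0], [0.0]
frame(curr, prev, [-10.0], 0.1, [floor_at(-0.05)])
```

Running the constraints several times per frame lets them negotiate with each other, which is exactly what makes the ragdoll later hold together.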

Yet again, let’s build the constraint system in small steps:

The Bounding Box

This is the simplest constraint to implement. Just check if the particle has moved beyond the box and limit it to the box.
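In code, that is one clamp per coordinate (the box here is assumed to be the unit square; use your window’s bounds):

```python
def clamp_to_box(p, lo=(0.0, 0.0), hi=(1.0, 1.0)):
    """If the particle has moved beyond the box, pull it back to the edge."""
    return tuple(min(max(c, l), h) for c, l, h in zip(p, lo, hi))
```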


The Distance Constraint

This is where things get interesting. Adding this constraint is a huge step towards modelling our ragdoll. As the name says, this step involves fixing the distance between any two points. For the sake of convenience, let’s call the points A and B. We put a constraint that the distance between these two points must be, say, d. If the new distance d′ is different from d, the delta d′ − d is found. Each point is then pushed delta/2 away from or towards the other, based on the sign of delta. The direction of this push/pull is parallel to A–B.

Do keep in mind that this constraint is added on top of the existing bounding-box constraint. We see something like this:
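The relaxation described above, as a sketch (names mine; it assumes the two points are never exactly coincident, or the normalisation would divide by zero):

```python
import math

def satisfy_distance(a, b, rest):
    """Nudge a and b by half the error each so |A-B| returns to `rest`."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    dist = math.hypot(dx, dy)
    push = (dist - rest) / (2.0 * dist)  # signed half-error, normalised
    return ((a[0] + dx * push, a[1] + dy * push),
            (b[0] - dx * push, b[1] - dy * push))

# Two points 2 apart, constrained to distance 1:
a, b = satisfy_distance((0.0, 0.0), (2.0, 0.0), 1.0)  # → (0.5, 0.0), (1.5, 0.0)
```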

Grand Finale

Believe it or not (no, just believe it), what we’ve built until now is enough to model a stick figure. So let’s build it and fool around with it!

Go ahead and make a simple stick figure with dimensions that please you. Put in the distance constraints to keep it from collapsing into a singleton, and save yourself the embarrassment of being called a simpleton. Also, keep in mind that you’ll need a few distance constraints where you don’t really wanna draw a stick. You’ll hopefully understand it better from the GIF below.



You can of course explore other kinds of constraints, or other structures of constraints. There’s a lot of exploratory opportunity (…not that kind!), and a lot of things can be implemented just with the knowledge we’ve gained here.

For instance, to implement dragging of the model, we can add a new constraint on some point. The constraint here is that the point’s position equals the mouse co-ordinate. The other constraints ensure that the model moves with that one point we’re dragging.
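A sketch of that pin (names mine; run it alongside the distance-constraint pass each frame, so the rest of the figure gets dragged along):

```python
def pin_to_mouse(points, idx, mouse):
    """Hard constraint: the grabbed point sits exactly at the cursor."""
    points[idx] = mouse

points = [(0.0, 0.0), (1.0, 0.0)]   # a toy two-point "model"
pin_to_mouse(points, 0, (0.5, 0.5))
```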

And this can also do a manageable job at simulating cloth. Just set the vertices and constraints accordingly to generate a cloth and sim away!


This write-up and my implementation are inspired by the classic paper Advanced Character Physics by Thomas Jakobsen.

Pat yourself on the back, because you can be proud that you’ve just learnt something that was used in one of the best game franchises around! Yes, this technique made its gaming-industry debut in the 2000 game Hitman: Codename 47.

And now, time for a small pep talk… What we learnt now is just a tool. Remember, all the knowledge in the world is just that: a tool. The greatness lies with the one who can make the best of it. I know we’re all tired of people quoting random stuff and attributing it to Einstein, but trust me on this one, I’ve got this from a reliable source: “Imagination is more important than knowledge. For knowledge is limited to all we now know and understand, while imagination embraces the entire world, and all there ever will be to know and understand.”

Believe yourself to be an artist and you are one. And do not hesitate to take up unconventionalities.

I am an artist you know… it is my right to be odd.
E A Bucchianeri, Brushstrokes of a Gadfly

A Rant on the Government e-mails!!

I was going through some of my files and came across this write-up that I had written as a part of the course History of Graphics Design on Coursera. It seems [surprisingly] well written and was also lauded by my evaluators. I thought it was worth sharing! To set the context, I had to write about how advertising strategies have changed over time, and I had to pick a case study to analyse as well.
So, I chose this e-mail from the current Government of India. It was quite interesting to see that the current government has chosen to continue using technology, and not just limit its use to the elections. This is what a typical mail from the government looks like. You do not need to actually read what’s written to get my point. Just notice the amount of writing, and the placement of the graphic elements in the mail.


The following is me quoting myself:

The product that I’ve chosen is an e-mail advertisement from the Government of India. The National Informatics Centre (NIC) created the email. The email serves as an information brochure to the citizens of the nation. It makes it very convenient to track the government’s progress and get updates on the new schemes the government proposes. This was never seen before in India and is very unlike the previous governments in power. From a different perspective, it serves as an advertisement for the current government. By the time the next elections take place, they would have already spread their word and would have a stronger foothold on the population.
We see a lot of national programs starting off in the country, and email is a very cheap means of mass communication. This is obviously one strategy, the strategy of mass communication. But what I want to talk about is an outdated strategy they have adopted in their emails. If you have a look at the recent mail (attached), it feels very unprofessional. It is also inconvenient to go through the mail with so much written all over. The programs are defined and their functionality is over-elaborated, all cramped into one place. True, this email is meant to spread awareness about the national programs. We know awareness programs are taken up when information has not spread efficiently. But how much awareness can you spread if the awareness program itself is inefficient?! An analogy can be drawn directly to the last video, “Words doing the work”. Advertisers used to follow this strategy, but the trend that followed, as we can see, was a minimalistic style.
One probable reason they have used this strategy is that they do not have so much to sell as information to offer us. Although it is an advertisement of sorts, they can afford not to adopt conventional strategies because they already have our attention. We want to know what the government has to offer because we are paying our taxes and want to see the benefits now. This is certainly different from the case where, say, someone is trying to sell me a phone. They do not have my money yet, so they need to lure me in and make it as easy as possible for me to buy their phone.
Another possibility is that because the Govt. of India is trying to provide us information rather than sell us something, and because advertising strategy seems unimportant to them, they do not have to invest much in advertisements. Such poor advertising budgets would only allow them to hire poor ad designers.
On a closing note, since they do not have much incentive to improve (unlike advertisers), I believe it will be a long time before we see more conveniently readable informational emails/posters.

Help Yourself, Help Others

Gürbüz Doğan Ekşioğlu is a Turkish artist/cartoonist. One of his paintings got my attention.

Most of the analyses that I’ve seen go by captions similar to “They let you think they are helping!”.

But from what I see, the efforts of the man on top are sincere. The following discourse conveys my thoughts on the same (Narrated by the man on top):

I want to help my people, but am I good enough? If I’m going to help I’ve got to do my best.. So I rise higher hoping to help better.. On and on I go until I’ve reached the top.. but alas.. I guess I’ve travelled a little too far.. and no more can I reach the very people I wanted to help.. 

It shows how a person with all the good intentions may not achieve what he wants because of a miscalculation of the goal. The goal here was to help the man in the trench. But the person outside [with all the good intentions!] rose to the top thinking that he could help better.

This also reflects how people who want to develop let go of their roots to grow further in life… but after a time they can hardly recognise their own roots. They’re just too far away. It’s sad if you can relate to it.

As a remedy, people should always keep in mind where they want to go in their lives. It’s a common practice to set sub-goals to achieve the bigger ones, but one should not be engrossed in over-achieving them. It’s the main goal that matters, and it’s the main goal that always mattered. In this case, if the man outside wanted to help the other man from his better position, he should have realised that going higher might not actually get him closer to his goal. Ignorance is not innocence and hence cannot be accepted as a reason to overshoot. He should have checked, every once in a while, what he actually wants. In other words, one should re-discover oneself whenever one feels lost. Not as easy as it sounds, I realise, but one should always strive. On an entirely different note, one shouldn’t give up because perfection is impossible to achieve, but must strive towards it. Perfection is like infinity. One can only do a lim(x→∞), a limit with x tending to infinity. [For those who want to take a bite at it, I know we can have systems where perfection is possible. Finding the square root of 4? 2. Perfect. No better solution than that! LOL]

When we’re talking about re-discovering our life goals, an analogy strikes me with agile software development (we need not limit it to software, actually; you should read about it sometime if you haven’t already). Other models of software development usually involve taking the requirements and delivering the solution at the end of the term. But agile development involves checking and re-checking what is required and refining the path to the goal. This makes sure that neither the developers nor the customer labour under the assumption that everything is on the right track when it isn’t. This saves a bitter surprise for both parties. Like the surprise of both parties in this painting. A bitter surprise.

Music Videos in Slow Motion? What’s the big deal?!

(A random) Observation:
Did you realise that syncing the singer’s lips in a slow-motion music video (take Hymn for the Weekend by Coldplay, for instance) is not so trivial?

During the shoot, the song has to be played faster than its actual tempo to help the singer lip-sync!

In other words, if one wants to make a slow-motion video for a song, the audio stays at the same tempo, but the video runs at a different “tempo”. This must be compensated for! So, if I want the video to play at a rate of 0.75x, then while shooting, the singer has to sing the same song at 1/0.75x, or about 1.33x.
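The arithmetic, for the sceptics (nothing from any actual production, just the reciprocal):

```python
def performance_tempo(playback_rate):
    """Tempo the singer must perform at so the lips sync once the
    footage is slowed (or sped up) to `playback_rate` x."""
    return 1.0 / playback_rate

performance_tempo(0.75)  # → about 1.33x, as in the 0.75x slow-mo example
```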

A similar idea applies to music videos where the video is played faster than the song (Friday by Rebecca Black… er… rofl… no? haha). No hate, please. She’s probably already had a hard time making the video!