Ravi Mohan's Blog

Friday, October 28, 2005

An Exercise In Writing Part 2

I have taken the rewriting for clarity exercise one step further.

The original paragraph is

"We have seen how interpreters can be used to model and explain the behaviour of programming languages. The explicit treatment of environments serves to explain scope and identifier lookup rules and the modelling of procedures as closures explains how procedures use lexical scope to make their behaviour independent of the environment in which they are invoked. Our interpreters are written in scheme, but are suitable for implementation in many other languages because they use ordinary data structures and procedure calls."

My best effort so far (without adding any extra information) gives

"We write interpreters to understand how programming languages work. How we design the components of an interpreter determines how the corresponding features of the defined language behave. For example, how we implement environments defines the rules of scoping and identifier lookup. If we model a procedure as a closure, we can grant it lexical scope, freeing its behaviour from its invocation environment.

All our interpreters use ordinary data structures and procedure calls. So we can implement them in any language (including Scheme)."

Thursday, October 27, 2005

An Exercise In Writing

(comments are still off)

I am making an effort to improve my writing. Here is a sample of what I have been doing.

In the following exercise, I take a murky paragraph (from Essentials of Programming Languages, First Edition, Chapter 8, "Continuation Passing Style") and try to systematically make it clearer. (The technical content of the passage is irrelevant. I am concerned here with sentence and paragraph structure.)

The original paragraph

1."We have seen how interpreters can be used to model and explain the behaviour of programming languages. The explicit treatment of environments serves to explain scope and identifier lookup rules and the modelling of procedures as closures explains how procedures use lexical scope to make their behaviour independent of the environment in which they are invoked. Our interpreters are written in scheme, but are suitable for implementation in many other languages because they use ordinary data structures and procedure calls."

Getting rid of the nominalizations and passives gives

2." In previous chapters we learned how interpreters model and explain programming language behaviour. First, we focussed explicitly on environments to understand scope and the rules for identifier lookup. Then we modelled procedures as closures to understand how lexical scope enables procedures to have behaviour independent of the environments in which we invoke them. So far, we wrote the interpreters in scheme, but we could use many other languages because they all use ordinary data structures and procedure calls."

Splitting off the "writing in scheme" idea into its own paragraph, and eliminating the irrelevant sequencing, gives

3."Writing Interpreters is the best way to model programming languages and learn how they behave. When we focus explicitly on environments, we understand how scoping and identifier lookup works. If we model procedures as closures, we can understand how lexical scoping enables a procedure to have behaviour distinct from the environments in which we invoke them.

So far, we wrote the interpreters in scheme, but we could use many other languages because they all use ordinary data structures and procedure calls."

Now there is a concept gap evident between the first and second lines. Plugging that gap and rewriting the third sentence to eliminate the irrelevant "we", we get

4. "Interpreters model programming languages and explain their behaviour.To understand language features we need to study the structure of interpreters implementing those features.If we focus on environments, we understand scoping and identifier lookup. Modelling procedures as closures clarifies how lexical scoping makes the behaviour of a proceudure independent of the environment in which we invoke it.

So far, we wrote the interpreters in scheme, but we could use many other languages because they all use ordinary data structures and procedure calls."

I don't like the phrases "(lexical closures) make the behaviour of procedures independent" and "environment in which we invoke". So I replace them.

5. "Interpreters model programming languages and explain their behaviour. To understand language features we study the architecture of interpreters implementing those features. If we focus on environments, we understand scoping and identifier lookup. Modelling procedures as closures clarifies how lexical scoping frees a procedure's behaviour from its invocation environment.

So far, we wrote the interpreters in scheme, but we could use many other languages because they all use ordinary data structures and procedure calls."

Now the paragraph is as clean as I can make it, but so far I have treated it as a standalone chunk of text with no relation to the rest of the chapter. I fix that (these modifications depend on understanding what the author intends to teach in this chapter), smooth away the abruptness of the second paragraph, and remove an ambiguous "they" (they = interpreters? languages?) to get the final version

6. "Interpreters model programming languages and explain their behaviour. To understand language features we study the architecture of interpreters implementing those features.

If we focus on environments, we understand scoping and identifier lookup. Modelling procedures as closures clarifies how lexical scoping frees a procedure's behaviour from its invocation environment. To study recursion, exceptions, and other language features involving transfer of control, we should study an interpreter that explicitly maps control flow using continuations.

So far, we wrote the interpreters in Scheme and they all use ordinary data structures and procedure calls. So we could, in theory, use many other languages. In practice, we .."

At this point, I can't think of anything else to do (well, I could change the rhythm of the sentences to avoid a staccato effect, but here I am concerned with sentence clarity, not rhythm). So I stop.

Friday, October 21, 2005

Blog Lockdown

I have disabled this blog temporarily. There will be no new posts for a month (or two) and comments have been turned off. Mostly, it is because I am too busy and will often be away from the net during this period. I also find myself increasingly dissatisfied with the pathetic level of my writing skill. Maybe a short break will give me some clues as to how to get to the next level.

I am available by mail. If anyone writes in, I will (eventually) respond.

Thursday, October 20, 2005

Balloons, Pins, Ego

A thought experiment.

Picture yourself holding a set of helium-filled balloons of various sizes and colors. Each has a string tied to its base, and you hold all the strings in your hand.

Each balloon has a label on it which names one of your "attributes". Say "Indian","American", "good programmer", "employed", "iit" (;-)), "thoughtworker", "ai hacker", whatever.

Now ask yourself: are you holding those balloons very tightly, to the point where you can't let go (or can't bear to see others jab them), or do "you" exist independently of the balloons you hold?

I just did this experiment for myself, and found that I get irritated when people prick a balloon labelled "rational". In other words, I get irritated when people accuse me of behaving irrationally, of implying things I didn't, and so on.

Of course this is stupid. My rationality or irrationality has nothing to do with whether people say I am irrational or not.

Fairly obvious huh?

I found this "which of my balloons is being pricked?" a valuable question to ask myself whenever I start feeling irritated, angry, cornered etc.

And then prise those fingers apart and let the balloon float free.

Or confidently wait, knowing the balloon is unassailable and unbreakable.

Or just enjoy the "pop" and smile.

Wednesday, October 19, 2005

IIT Grad == Excellent Programmer?

This is something I have never quite been able to resolve. Logic says they should be better programmers (hey, the IIT BTech guys apparently work through SICP early, and if that doesn't upgrade your programming skills I am not sure what will). And Google India apparently hires only IIT grads, so there is probably something to that argument.

On the other hand, in my experience, this "superiority" of IIT grads never really showed itself as a real world phenomenon. Most (but not all) of the really good programmers I see seem to have a BSc (or other non-BTech-CompSci) background, and are almost universally self-taught programmers.

While I was in Thoughtworks, I even went through a strange phase in which TW interviewed about 40 IIT grads after a company that used to hire only IIT grads went bust, and ended up making an offer to just one guy (and he was very, very good. I knew him from my Aztec days. He eventually went to July Systems).

It is all very puzzling, because I do know many bright people from IIT, but they are almost all in the USA. So maybe that is one explanation? The best folks from IIT go to the USA, and the people a Bangalore-based company interviews are probably the "lesser" ones of a batch, especially in the "enterprise" space? I really don't know. But is there really such a huge difference in the capabilities of people who studied in the same batch?

Of the 40 or so people TW interviewed (and didn't hire), one interview really stands out. One candidate claimed a lack of knowledge of "enterprise" coding but was, in his own words, a "specialist in Compilers and Mathematical/Scientific Programming". The interview team consisted of me (very, very interested, and fairly knowledgeable, in compilers) and a colleague, JK Werner, who graduated in Mathematics.

In those days, one of Thoughtworks' guidelines for interviewing was "If people don't know something, that's fine, but what they say they know, they'd better know and know well". So the candidate not knowing "enterprise" stuff was fine (it is all fairly simple anyway, and an otherwise competent programmer can pick it up fast). So JK and I proceeded to have a conversation about compilers and math, and the interview was ... terrible. This person was just mouthing buzzwords without any deep knowledge.

Question (me): "Ok, so after lexing and parsing you get an AST. What do you do then?"

Answer: "Hmm... I am not sure..." (his CV claimed he'd written a full-fledged "parallel compiler")

Question (JK): "Your CV says you have worked extensively with vector spaces, so here is a simple question to start off. What is an eigenvalue?"

Answer: "Hmm, well, I never got that far" (!!) (as per his CV he had done all sorts of fundamental Linear Algebra related stuff)

This was the most disappointing interview in my life. Other interviewers narrated horror stories of "Senior Architects" who didn't know what "classpath" was!

So I am forced to conclude that being bright, getting through IIT (those entrance exams are tough), and even working as "Lead" or "Architect" or whatever on large projects does not necessarily make you a good (forget great) programmer.

Also "enterprise" work and that too in India is probably not attractive to the average IIT graduate who has so many more interesting options.

So these days I just dismiss the educational background and look at coding skills exclusively.

Still, it is all very strange. If anyone has any insights, please enlighten me!


Joe Williams was kind enough to point out a possible misinterpretation (you can see Joe's comments in the comments section).

What I am saying

  1. I used to think IIT graduates (and students) were way above average in programming ability.
  2. I expected that, given a fair (but tough) interview, about 35 of the 40 would get through. When only one did, I was forced to re-examine my belief (see above).
  3. When I examined the best programmers I knew and their schools, I found that most (but not all) were BSc/non-comp-sci graduates.
  4. This is possibly a perception issue. I am asking for clarification.
  5. Logically, I now believe that programming ability and schools are not correlated.

What some people think I am saying

  1. TW (India) is an uber cool company.
  2. Anyone who doesn't get through at TWI is a poor programmer.
  3. Most IIT folks we interviewed didn't get through.
  4. Therefore IITs suck and all IITians are poor programmers.

Needless to say, what I am claiming is the first list of assertions. Anyone who claims the second list is "true" has no clue.

Thanks, Joe.

Now I have more questions: if 40 MIT graduates interviewed (say, at Google), how many would get through? The question remains: does your school correlate with your chances of being an excellent programmer?

Hopefully now things are clear.

Monday, October 17, 2005

And Miles To Go - Part One

This post could be titled "How Paul Graham's Writing Makes me Weep With Frustration With Its Elegance And Conciseness" but that is probably too long a title.

The point is still valid, though. In his latest article, I find he has articulated, in simple, crystal-clear sentences, ideas I have been wrestling with in wretched futility for months.

Here is a simple example. When I went back to Thoughtworks for a visit recently, someone asked me, "You have been doing some interesting (== non-J2EE, non-enterprise) work for a year now. So what did you learn?" And I said (among other things), "Math is important." But then the thoughtworker asked, "But isn't Math just another domain for the 'analyst' to master? Any domain expert who deals with a domain will know exactly what he wants calculated and how. Why should we developers delve into the underlying theory? In other words, isn't Math just a domain?"

I instantly saw that there were deep chasms dividing the world of enterprise software from the kind of code I write these days, but I couldn't find the words to bridge those chasms. I mumbled weakly that "understanding math somehow makes your thinking better". Neither the questioner nor I was satisfied.

Now see how elegantly Paul expresses it (emphases mine). After saying that letting your mind wander is often a good source of ideas, he begins to wonder why that is so.

"....What happens when your mind wanders? It may be like doodling. Most people have characteristic ways of doodling.....Perhaps letting your mind wander is like doodling with ideas. You have certain mental gestures you've learned in your work, and when you're not paying attention, you keep making these same gestures, but somewhat randomly.In effect, you call the same functions on random arguments. That's what a metaphor is: a function applied to an argument of the wrong type....The habits of mind you invoke on some field don't have to be derived from working in that field. In fact, it's often better if they're not ... Are some kinds of work better sources of habits of mind than others? I suspect harder fields may be better sources, because to attack hard problems you need powerful solvents. I find math is a good source of metaphors good enough that it's worth studying just for that."

And that is what I should have said when people asked what good Math was. Math gives you more "primitives" to operate with and more ways of combining them. So do programming languages like Erlang and Lisp (which I was forced to turn to when the problems I was working on got too hard to handle with Java or Ruby). Math (and some programming languages) are indeed "powerful solvents" that give you more ways of perceiving a problem, more choices in how to deal with it, and, overall, a much richer field of possibilities.

Duh! I had to wait for Paul Graham to write that up, and when I read the above quoted sentences I was shouting at myself, "Exactly! That is what I wanted to say."

Here is an ultra-simple example of what I experienced. I needed to build a fairly complicated neural network with a very abstruse training scheme. For a long time I used the habits of thought I had picked up in years of "Object Oriented Thinking" and thought of a neural network as an "object having properties X and Y, with behaviours A and B, consisting of n Layer objects, each having properties blah and behaviour foo... A Training Method is a Strategy Object that decides ..." and so on.

The people I was talking to were primarily mathematicians and scientists, and soon a communications gap yawned and we were all getting frustrated. Now I could have insisted they grok Objects, or I could just learn the underlying math. I chose the latter, gritted my teeth, clenched my muscles, and jumped into the Linear Algebra and Calculus needed to understand neural networks and ... found it quite interesting. Nothing like the nonsense poured down our throats in college. I was now using the best books, written by the most talented folks in the field, to learn, and I was applying the knowledge I gained to solve a very tough problem. And soon I saw that understanding Vector Spaces and Differential Equations allowed me to see neural networks in a new way. As I told a friend sometime later, a neural network is a set of equations, and every training scheme solves a problem in topology. There is no way I could have come to that understanding without harnessing the underlying math. No amount of "Domain Modelling" or "Extreme Programming" or whatever would have given me the understanding needed to tackle the problem.
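The "a neural network is a set of equations" view can be made concrete with a minimal sketch (the weights, names, and shapes here are mine, purely illustrative): a one-hidden-layer network is just h = sigmoid(W1 x + b1) followed by y = W2 h + b2.

```python
import math

def forward(x, W1, b1, W2, b2):
    """One-hidden-layer network written as plain equations:
    h = sigmoid(W1 x + b1), y = W2 h + b2."""
    def matvec(W, v):
        # matrix-vector product, each row dotted with v
        return [sum(w * a for w, a in zip(row, v)) for row in W]
    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))
    h = [sigmoid(z + b) for z, b in zip(matvec(W1, x), b1)]
    y = [z + b for z, b in zip(matvec(W2, h), b2)]
    return y
```

No objects, strategies, or layer hierarchies: the whole "domain" is two matrix multiplications and a nonlinearity, which is exactly the vocabulary the mathematicians were using.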

And that is why today, when some people look at the work I do and say, "Dude, I don't want to study all this math stuff. Too dry for me. Lisp is too hard as well, leave alone all this concurrency stuff. But I want to learn AI too. I will just use TDD and Java", I just smile to myself. Unlike most "enterprise" work, in production-strength AI (and most other "tough" fields, I would imagine), not understanding the underlying domain deeply enough to acquire an intuition for what will work and what will not just wastes time. Without the new perceptions opened up by grokking the "powerful solvents" (and it will take about a year of work to get to that point), one is just as blind as ever.

And it needed Paul Graham to make it all clear in about 5 sentences.

I have also come to see that the Analyst/Developer split is very dangerous, even in "enterprise" software, and leads to substandard software. But that is a topic for another post.

Sunday, October 16, 2005

Yet another "apprenticeship pattern"

Earlier, I wrote about an 'apprenticeship pattern' I discovered.

Here is one more.

Be good at something other than Programming

If you plan to be good at programming, take up the practice of another discipline - music, painting, a martial art, woodworking, it doesn't matter - just pick one that interests you.

Many top notch programmers are very good at other things. Paul Graham is a painter. Peter Norvig is a mathematician. Eric Raymond is a pistol shooting expert. Richard Stallman says, "My hobbies include ... international folk dance, flying, cooking, physics, recorder, puns, science fiction fandom, and programming." (While I have never seen Stallman dance, people who have assure me that he dances very well indeed.)

While there are exceptions (more on this below), I would be very skeptical of someone who claims to be a good programmer ('hacker', if you will) but is not skilled at something else as well.

Many people have written about this from different angles. Paul Graham says,

"...Hacking and painting have a lot in common. In fact, of all the different types of people I've known, hackers and painters are among the most alike.

What hackers and painters have in common is that they're both makers. Along with composers, architects, and writers, what hackers and painters are trying to do is make good things."

Eric Raymond says in How To Be a Hacker,

"... Train in a martial-arts form. The kind of mental discipline required for martial arts seems to be similar in important ways to what hackers do. The most popular forms among hackers are definitely Asian empty-hand arts such as Tae Kwon Do, various forms of Karate, Wing Chun, Aikido, or Ju Jitsu. Western fencing and Asian sword arts also have visible followings.... Develop an analytical ear for music. Learn to appreciate peculiar kinds of music. Learn to play some musical instrument well, or how to sing... "

The one apparent exception I have seen to this 'pattern' is when truly exceptional programmers seem to have no hobby (besides programming). My friend Anand Babu, would seem to be an example of this.

Anand spends a significant amount (or all) of his free time writing code and is the creator of truly significant programs. If, like Anand, you are the author of code that makes a significant difference to millions of people, then you probably don't need this 'pattern'. Geniuses don't need to follow rules or patterns anyway.

Otherwise, in my experience, serious practice of music, for example, might help you become a better programmer than grinding through yet another J2EE app.

Monday, October 10, 2005

Gloom And Despair

Eight years ago, I was writing code like

public class PaymentProcessor {...}

A year and a half ago, I was writing code like

public class PaymentProcessor {....} (oh yeah, this time there was a class PaymentProcessorTest backing this up)

I was gloomily reflecting on those wasted years when I came across this snippet (from news.com)

"... Stanford University's Racing Team has accomplished a historic feat of robotics, finishing first in the DARPA Grand Challenge, a 131.6-mile driverless car race that no artificially intelligent machine has ever conquered before. ... Onlookers were wide-eyed watching the vehicles work their way through the extremely tricky course even though much of the race they could see only by wide-screen TVs in the spectator tent or by a real-time mapping tent.

For example, people in the spectator tent watched on with awe when Stanley drove over and down Beer Bottle Pass, which has 1,000-foot drops and hairpin turns. The packed crowd cheered when the car made it around the first switchback and then began chanting "Stanley, Stanley" as it drove down. .... "

Now that is real programming. While Providence was kind enough to set me back on track, I still can't get over how many years I wasted on "enterprise" code, essentially writing the same web->db->web routines over and over and over.

I am depressed.

Sunday, October 09, 2005

Back to Step One

I have played (ok, fiddled with) a steel-string guitar for several years now. While I have a classical (nylon-string) guitar and have worked through a good part of books 1 and 2 of Frederick Noad's classical guitar series, I never became very good at it. While I could play some pleasant-sounding tunes, no matter how hard I tried I could never sound like Julian Bream or Segovia. Their playing had a richness and lushness I could never match.

So I put away the nylon string guitar and never used it much. I thought I was doing something wrong and even went to a teacher, but that didn't help very much, because while he gave me some good advice, I still didn't get the guitar to sound like I wanted it to. Gradually, the pressures (and the monotony) of a fulltime job meant that I ended up not playing very much at all.

Recently a friend sent me a copy of the Pumping Nylon DVD. I also happened to read George Leonard's fantastic book, Mastery. Combining Scott's guitar advice with the "Mastery" notion of maintaining total awareness and relaxation transforms the simplest steps into a discipline of fantastic depth.

Consider the simplest possible action on the guitar, that of plucking a string with a fingernail. Before, I would just pluck it and the note would sound. Scott suggests a four-part motion - place the finger on the string, apply pressure, pluck, and "empty out" or consciously relax the plucking finger - each to be practised to perfection before combining them. Performing this as four distinct steps with the "mindfulness" advocated in Mastery is a very challenging exercise.

Relearning the guitar is simultaneously easier and harder than learning for the first time. On the one hand, you know quite a bit already - how to move from one chord shape to another, how to play staccato or legato and so on - but on the other, everything you know is ever so slightly "off" or just plain wrong and thus demands a totality of focus to train your reflexes in the new grooves.

So there I am, plucking the same note over and over again, marvelling at how small changes in the angle of attack or pressure yield infinite variations on a single note. After a few dozen (hundred?) repetitions, there comes a moment when everything "clicks" and my finger flows on and off the string and a perfect, golden note shimmers in the stillness.

And for just that one moment, I do sound like Julian Bream.

Friday, October 07, 2005


The "code should be readable" principle, like any other, can be taken to extremes.

Recently, someone advocated replacing (JUnit's) assertEquals(5, obj1.getValue()) with Result.Of(obj1.GetValue()).ShouldEqual(5);

Yet another suggestion I heard was to replace assertTrue(x > y) with Should.be(true).that(x).greaterThan(y) !!!

A class called "Should" with a method called "be"?

The best reaction I heard was from a friend who said,

"I can write a program to substitute spaces with "." and put a "()" at the end of requirements written in English. That would be better than this lunacy. Thus I.need(true).someMoney() :-p"

Remember the language where you could just "express business requirements in English" without "developer speak" coming in the way? Yes, COBOL.

This ridiculous notion that code should look like English comes from a misunderstanding of the "Objects model the domain" principle. It also appeals to those who have only an (imperfectly understood) Object Oriented Paradigm in their bag of tools, frequently those who can only "speak" Java (although I have seen this happen to some self proclaimed "smalltalk experts").

I once worked on a project where the "Chief Designer" was Object crazy and we ended up creating Objects that replicated all the components of a relational database, imperfectly. Thus we had classes like "Table" and "Query" right in the code, with all sorts of fancy tree creation (and tonnes of Visitor classes) to create simple "Select *" queries as objects. We even had a ridiculous "FieldedBusinessObject" class that was essentially a glorified Hashtable, with well over 300 fields and methods whose interrelationships were laboriously hand coded. All in the name of "OOD" (and on a "100% pure XP" project, too!). I pity the client who paid good money for this hogwash.

Genuinely understanding a paradigm means knowing what it is not good for just as well as knowing how to apply it effectively where it is suited.

Using an object structure to create a poor man's version of a very ambiguous spoken language like English is a warning that the practitioner probably shouldn't be programming live systems in the first place. Doubly so when this misunderstanding is couched in terms like "Ubiquitous Domain Language", which have very precise meanings and context, totally unrelated to such foolishness.

If you must have a totally different language embedded in your code, it is far better to learn how interpreters and compilers work and how to embed sublanguages in your code (no, XML is NOT a good way to do this!). Yes, that means learning some "esoteric" computer science theory.
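To show what "embedding a sublanguage" can mean, as opposed to English-mimicking method chains, here is a sketch: a tiny interpreter for arithmetic expressions represented as nested tuples (the representation and all names are mine, purely illustrative):

```python
def evaluate(expr, env):
    """Evaluate a tiny embedded expression language.
    Numbers are literals, strings are identifiers looked up in env,
    tuples are (operator, operand, operand)."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):          # identifier lookup
        return env[expr]
    op, *args = expr
    vals = [evaluate(a, env) for a in args]
    ops = {'+': lambda a, b: a + b,
           '*': lambda a, b: a * b,
           '>': lambda a, b: a > b}
    return ops[op](*vals)

# ('>', ('+', 'x', 2), 10) means: x + 2 > 10
```

A dozen lines buy you a real, composable sublanguage with its own evaluation rules, instead of a forest of "Should" and "be" classes contorting the host language.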

If you ever find yourself contemplating classes like "Should" with methods like "be" (coming soon to a framework near you! I am NOT joking), find the nearest wall and bang your head on it thrice, very, very hard. Do this enough and you'll be ok! Or at the least you will soon not be able to do much harm.

Alternatively, the nearest Walmart is probably looking for checkout clerks.

Wednesday, October 05, 2005

A Foobject is Made of Green Cheese - Sense And Nonsense in Technical Discussion

I have been hanging out on certain "technical" mailing lists recently.

This is a sample of the type of argument I hear. The actual concepts have been replaced with fictitious 'FooBars' etc. to protect the guilty (and, more importantly, the innocent).

" ...I hereby proclaim a new style of programming. State and Behaviour will be consolidated in one entity, these will be called Foobjects.No, they have nothing to do with Objects, though this may sound similair to uninitiated ears. Don't ask me for exact differences; but if you unwashed heathen must know, here it is - allFoobjects are named with a name that is exactly 17 letters long. Now coming back to more important things, Foobjects will be used to model the domain.

Before you name a foobject, touch your nose to the floor and draw the holy name "foo" on the floor 3 times. The contact between the tip of your nose and the floor will put you into the right mental state to perceive the behavioral characteristics of the Foobject. And woe betide anyone who dares to name a foobject without doing this Holy Rite Of Naming. And of course this is just my opinion. Nobody can ask me questions or ask me to justify this logically. But this is of course better than all existing methodologies of designing software systems. I won't show you any code, but of course any code thus produced will be superior. You'll just have to take my word for it. Any challenges to this will be treated as a personal attack. I will soon write a FooFramework and FooBook. Keep your wallets ready. ..."

Now in contrast, some extremely useful stuff.

Dale Emery on naming test methods (slightly paraphrased to remove context):

" The primary principle I use to organize tests is The Principle of Rapid Fault Identification:

If one or more of these assertions were to fail, what organization would lead me as quickly as possible to the faulty code? ... I might want three separate test methods so that if one fails, the others can still run and give me further information. If I write these as one test method, the first failed assertion aborts the method, and I never find out whether the subsequent assertions pass or fail.

A sub-principle I use is The Principle of Independent Utility:

If the results of the assertions might be independently useful to me in identifying the fault, I organize them into separate tests so that one failure doesn't prevent me from getting the useful information from the others. If the other results are likely to be just noise, I organize the assertions into one test method so that the first failed assertion precludes the others and suppresses the noise.

I use The Principle of Rapid Fault Identification also for naming: If this test were to fail, what name would lead me as quickly as possible to the faulty code? As an example, I might name three test methods something like this:

GameIsNotOverWhenFirstCreated()
CurrentFrameIsNotOverWhenGameIsFirstCreated()
NoScoreWhenGameIsFirstCreated()

I might come up with better names with a little more thought (or at a time of day other than 5:30am). For example, I'd like to turn "NotOver" into a positive. Perhaps GameIsInProgressWhenFirstCreated().

The best test namer in the universe is Brian Button. I've studied his test names, and they tend to have three parts. For example, GameIsNotOverWhenCreated():

- Desired Result: Game is not over
- Coordinator: when
- Conditions: first created

These few principles have given a big boost to my testing."

This gem of a post justifies putting up with all the nonsense you have to wade through. It has already improved the quality of my testing 100%. The result-coordinator-condition naming format is extremely useful in coming up with good test names.
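Dale's principles can be sketched with a toy example (the Game class here is a stand-in I made up; only the one-result-per-method structure and the naming style come from his post):

```python
import unittest

class Game:
    """A made-up stand-in for the game the test names describe."""
    def __init__(self):
        self.over = False
        self.score = 0

class GameCreationTest(unittest.TestCase):
    # One result per method: if one assertion fails, the others still
    # run, and the failing name alone points at the faulty behaviour.
    def test_GameIsNotOverWhenFirstCreated(self):
        self.assertFalse(Game().over)

    def test_NoScoreWhenGameIsFirstCreated(self):
        self.assertEqual(0, Game().score)
```

Had both checks lived in one method, a failure of the first assertion would hide the result of the second - exactly the noise the Principle of Independent Utility is meant to avoid.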