One Man Hacking - Ravi Mohan's Blog
<p/>
<b>Resurrection Part 2</b> (2008-11-27)
<p/>
My new blog is <a href="http://pindancing.blogspot.com/">here</a>.
<p/>
<b>The Heart Must Pause</b> (2008-03-18)
<p/>
I've been blogging for almost 5 years now. For the last few months, I've been feeling that this blog is more and more out of sync with who I am, what I care about and what I really want to write about.
<p/>
I started writing on this blog because, when I asked someone how to get good enough to write a book, I got the reply, "First learn how to write a blog entry, then write articles, then write a series of linked articles and then you are ready for the book".
<p/>
Good advice. What started as pure technical practice, along the lines of practising scales in music, turned into a medium for exploring and clarifying my thoughts. On the way, I gathered an audience, generated a "blog persona" (very unlike the real me, but it was a lot of fun), met interesting women and got job offers. All good. The problem with developing such a "persona" or "voice" is that when your perspective shifts radically in a short time frame, it is hard to maintain continuity.
<p/> And so, this is the last post on this particular blog and using this particular "voice". I do plan to write again, in the future, but whether such writing takes the form of a blog, and if so, whether I'll have an "open to all" blog, remains to be seen. If I do write, it will probably show up on my (rather bare now, but that will change, in time) <a href="http://www.magicindian.com">website</a>.
<p/> Anyone who has spent time reading this blog and approved or (sometimes violently) disapproved, thank you for your time. 'Twas a lot of fun.
<p/> "One Man Hacking" is now dead. R.I.P
<p/>
PS: for those who care about such things, this post's title comes from a poem
<p/>
<i> <b>So We'll Go No More a-Roving</b> by Lord Byron
<p/>
So we'll go no more a-roving<p/>
So late into the night,<p/>
Though the heart still be as loving, <p/>
And the moon still be as bright. <p/>
<p/>
For the sword outwears its sheath,<p/>
And the soul outwears the breast,<p/>
And the heart must pause to breathe,<p/>
And love itself have rest.<p/>
<p/>
Though the night was made for loving,<p/>
And the day returns too soon, <p/>
Yet we'll go no more a-roving <p/>
By the light of the moon. <p/>
<p/>
<p/>
</i>
<p/>
<b>The Implicit Assembly Line</b> (2008-03-15)
<p/>George Lakoff, in his book "<a href="http://www.amazon.com/Metaphors-We-Live-George-Lakoff/dp/0226468011">Metaphors We Live By</a>", notes that we model our conceptual system on metaphors but are often unaware of how they shape our thoughts, speech and acts. In other words, we use one concept to explain, understand, articulate, and work with another concept.
<p/> For example, one metaphor that is very pervasive is "Argument is War". We use war as a metaphor in thinking about argument. Witness: "He <i>won</i> that argument", "His criticisms were <i>on target</i>", "I <i>destroyed</i> his argument".
<p/> The point George Lakoff makes is that argument really doesn't have much to do with war, except that we make it so by choosing war as an analogy. For example, argument could be (but often isn't) a "joint search for truth" or a "game" rather than a form of abstract combat with winners and losers.
Changing the underlying metaphor may change how you see something.
<p/> Once upon a time (though it is somewhat passé these days), the underlying metaphor for software development was "constructing a building (or bridge)", from which we still have "architects" and "engineers", UML "modeling", and so on.
<p/> These days there seems to be a shift of metaphor. The underlying metaphor for much discussion on software development seems to be "Software Development (of the services variety) is an assembly line process".
<p/>
The so-called "Lean Development" (or "lean agile" or whatever the fad du jour is) talks about "pieces of work" "flowing" through a development "pipeline" and endeavors to "reduce cycle time" through "kanban", using the "Toyota Production System".
<p/>
Using this metaphor, software developers become equivalent to assembly line workers whom managers or methodology experts "optimize". You can work on a "good" assembly line like Toyota's, where you have a degree of freedom to tinker with some of the details, or a "bad" assembly line (uhh, Wipro maybe?) where you endlessly perform the same meaningless motions forever.
<p/>
There are "kanban" or "lean" or "agile" experts out there who will "advise" you on how the assembly line should work, but will never work on an assembly line themselves! The assembly line worker doesn't have any choice about what he will work on or how long he will work on a particular assembly line, practically never comes into contact with a real customer (at best there is a "customer representative" or a "usability expert"), and is relatively easily replaceable. "Requirements" "flow" from an analyst to a developer and then to the QA department, and so on ad infinitum.
<p/> The funny thing is that most <em>developers</em> subscribe to this metaphor.
<p/> I choose to think of programming skill as just that. A skill. Just because you have a skill at metalwork (say), you don't have to work a boring job on a third world assembly line. Developers who build startups step beyond the "assembly line" mentality. They choose what to work on, how to work on it, what technology to use and what customers to target, and they interact directly with their customers. Others choose to work on open source kernels or compilers. Others are dragged kicking and screaming into Project Manager-dom.
<p/>
Of course, you don't even have to make your money with your programming skills. You can be an investment banker (or astronaut or actor or musician) and do your brazing and welding on the weekends.
<p/> In my (totally personal) view, knowing how to program well is like being literate in a largely illiterate world. One day everyone (or almost everyone) will know how to write. But in the meantime you don't have to work as a scribe just because you know how to write. You could be a general or merchant or prince or cleric or warrior for hire or Dragon Slayer (umm ok note to self - less D & D) and your writing skills would give you a significant edge over your illiterate peers.
<p/>
Just because you are a good programmer (you are, aren't you ? ;-)) you don't have to work for that 200,000 employee outsourcing company 12 hours a day on systems you don't care about with "industry standard",read "middle of the road" technology.
<p/>Unless you choose to.
<p/>
<b>DevCamp - A report</b> (2008-02-11)
<p/>
I am still "offline" but, due to popular demand, here is a very brief report on DevCamp.
<p/>
Bangalore has long missed a conference for serious developers. In Bangalore you often find methodology based conferences (like the Agile conferences) or various company sponsored conferences which are thinly disguised propaganda pitches (Sun/Oracle/Microsoft Tech Days, e.g.). There weren't too many conferences where people who code (and like to code) could get together and exchange notes with likeminded souls.
<p/>
DevCamp plugs that gap very neatly. The organizers had explicitly filtered out the usual "hands on training on blub framework X or Snake Oil Methodology Y" type events which proliferate locally. The blurb for the event read "Please assume a high level of exposure and knowledge on the part of your audience and tailor your sessions to suit. Avoid 'Hello World' and how-to sessions which can be trivially found on the net. First hand war stories, in-depth analysis of topics and hands on demos are best". Consequently, most sessions were very compelling, with people demoing their pet code bases and showing off cool hacks. People projecting code (vs slides) and actually coding on stage made this event a refreshing break from the usual slideware based mumbo jumbo. The unconference format was a particularly good fit, because you didn't have to distort your session to fit some arbitrary conceptual boundary.
<p/>
I couldn't attend as many sessions as I wanted. But I did catch Bejoy's LINQ presentation (decent, though a bit shallow), <a href="http://karthiksr.blogspot.com/">Karthik</a>'s Erlang testing tool (when this is finished it will bury JMeter) and <a href="http://www.jroller.com/viveksingh123/">Vivek Singh</a>'s session on <a href="http://www.codeplex.com/white">White</a> (brilliant, brilliant work!). The sessions I would have liked to attend but missed were the one on <a href="http://www.euindiagrid.eu/applications/biology">MOOSE</a> and <a href="http://siddhi.blogspot.com/">Siddharth</a>'s presentation on running Linux on a Nintendo DS.
<p/>
Thoughtworks organized the conference with clockwork precision and everything flowed smoothly. It was great to be back at TW and meet my ex-colleagues, which brought home to me (again) how much I miss working with bright, opinionated people. Aargh, I have to fix this. I am (very) glad I don't write enterprise software any more, particularly the outsourced variety, but I do miss working with bright people.
<p/>
So what could be improved? Not much, really. One of the projectors didn't work with Ubuntu, so I had to switch to Windows (bleh) for my presentation. My talk on monads seemed to go down well. Explaining esoterica like monads in a 30 minute session was an interesting challenge in communication, but it turned out ok. I had requests for a repeat session but I was too exhausted - some other day. The AC and the network conked out occasionally, but these were very minor issues. There were a few too many (imo) "passive" attendees who wanted to listen more than speak, which is against the Xcamp ethos, but I am hopeful that will change. Another thing future organizers need to watch out for is disguised product/recruitment pitches. There was at least one "you can code against our APIs and help us make more money" talk with zero technical content. Again, not an issue for this conference, but something to watch out for in the future. All in all, a very refreshing change from the usual frivolous frothiness of the BarCamps. Not a single "blogger" or "seo marketing person" or "movie club member" in sight. Just developers and code. Bliss!
<p/>
DevCamp is something that Bangalore really needs. I look forward to the next one.
<p/>
<b>[Ann] Blog Lockdown till March 15th 2008</b> (2008-02-03)
<p/>
I am "away".
<p/>
Regds,
<p/>
Ravi
<p/> P.S.: I <em>am</em> attending DevCamp. Sorry for the confusion.
<p/>
<b>About GRE scores</b> (2008-01-27)
<p/>
Ever since I <a href="http://ravimohan.blogspot.com/2006/10/cracking-gre.html">published my GRE scores</a> on this blog, many people have written to me asking how to prepare, etc. After writing the nth email with the same content, I thought I'd write the "official" answer here once and for all.
<p/>This is the "official" answer to "What advice can you give me on how to prepare for the GRE?"
<p/>The answer is "Nothing"!
<p/> I skimmed the Barron's guide vocabulary section and the (16 iirc) sections (in the same guide) for the quantitative section to refresh my memory one day before the exam. I was overloaded with work then and had no time to prepare.
<p/> My exam was scheduled for 8 o'clock in the morning and I was awake till about 05:00 working on a program. Since I got only about 2 hours of sleep, I was in this weird half asleep/awake state, which had the effect that I was totally relaxed and did not have the bandwidth to track the time remaining, etc. I just answered each question as it came up. I got the very last question in the quantitative section wrong because, for the first and only time, I checked the clock, found I had like 3 seconds to answer, panicked and randomly clicked an answer (which turned out to be wrong). Oh well.
<p/>Beyond the above I have <span style="font-weight:bold;">nothing</span> to say on the GRE. Don't bother asking.
<p/>
<b>Engineering - Some Working Definitions</b> (2008-01-25)
<p/>
<i>This is the third in a series of four blog posts. Read parts <a href="http://ravimohan.blogspot.com/2008/01/ratcatchers-and-engineers.html">one</a> and <a href="http://ravimohan.blogspot.com/2008/01/so-whats-wrong-if-you-arent-engineer.html">two</a> to get some context</i>
<p/>
A <a href="http://ocw.mit.edu/ans7870/BE/BE.010J/s06/videos/ocw-be010j-07feb2006.mp3">talk</a> (warning - mp3) by Dr Douglas Lauffenburger (of MIT's Biological Engineering Dept) provided enough ideas for a working definition of "engineering" (sufficient for <span style="font-style:italic;">my</span> purposes).
<p/>
(paraphrase begins)
<p/>
<i>
Dr Lauffenburger begins by breaking down engineering into two aspects - science (the study of things that exist) and technology (making things that don't exist).
<p/>
So,
<p/>
engineering = science (analysis - studying things that exist, break down into components, and methods of combining them) + technology (synthesis, building things by putting together the components identified by analysis ).
<p/>
Engineering further adds a "design principles" (how things get put together) focus to both analysis and synthesis.
<p/>
All engineers study mathematics. (This is a given).
<p/>
An engineering discipline has a base of science for its components and methods of combination. A branch of engineering picks a branch of science to base itself on. So, for example, Mechanical Engineering has a base of Physics and Materials Engineering has a base of Chemistry ( + Physics and the omnipresent Mathematics).
<p/>
Another way of thinking about engineering is
<p/>
"measure (properties of systems of interest), (use mathematics to ) model, manipulate (components and methods of combination, guided by the model) and make (things that don't exist)". --> (1)
<p/>
Yet another way to think about engineering,
<p/>
Engineering = mathematics + science + application area --> (2)
<p/>
There can be various combinations of (and subcomponents to) each of these three components.
<p/>
The "science" component in that equation needs to be manipulatable, quantifiable, modellable etc.
<p/>
</i>
<p/>
(paraphrase ends)
<p/>
Later in the speech, Dr L goes into why Biology only recently became modelable, and how, before that, the various branches of BioEngineering used Physics, Chemistry etc. as the underlying science rather than biology. Biology was often the "application area", but not the underlying science. Thus at MIT biology would be a minor, and other engineering disciplines like mechanical or electrical engineering (including comp sci like robotics and algorithms) would be applied to a biological domain like pharmaceuticals or prostheses.
<p/>
Dr L goes on to explain how this changed, and why and how Biology is, these days, a science you can base an engineering discipline on (and you really ought to listen to the full speech), but for the purposes of this post, (1) and (2) are what I am interested in, i.e.,
<p/>
engineering = mathematics + science + application area and
<p/>
(doing) engineering = measure (properties of systems of interest), ( use mathematics to ) model, manipulate (components and methods of combination, guided by the model) and make (things that don't exist).
<p/>
I suggest that programming fits into the "model" part of things, complementing mathematics. This is just an insight, not rigorously tested, but there are a couple of straws in the wind that make me think I am right.
<p/>
First, a scientist I work with explicitly identified the combination of programming and mathematical skills as a "force multiplier" that enables someone who has mastered both to zoom past someone who is strong in only one. He explicitly mentioned a programmer (a genius at programming, way better than I am) who couldn't make as much progress as I could because he couldn't wrap his head around the "maths as a modelling tool" idea, and another person, a scientist this time, who gets stuck periodically because he can't write production quality code.
<p/>
There are analogues in enterprise programming where someone who has mastered a domain *and* programming can provide an order of magnitude more business value (which is the main metric in enterprise programming) than someone who knows only banking or J2EE.
<p/>
Richard Hamming says in his speech "You and Your Research" (if you haven't read this you really ought to do it right away!)
<p/>
<i>
"...
``How will computers change science?'' For example, I came up with the observation at that time that nine out of ten experiments were done in the lab and one in ten on the computer. I made a remark to the vice presidents one time, that it would be reversed, i.e. nine out of ten experiments would be done on the computer and one in ten in the lab. They knew I was a crazy mathematician and had no sense of reality. I knew they were wrong and they've been proved wrong while I have been proved right. They built laboratories when they didn't need them. I saw that computers were transforming science because I spent a lot of time asking ``What will be the impact of computers on science and how can I change it?'' I asked myself, ``How is it going to change Bell Labs?'' I remarked one time, in the same address, that more than one-half of the people at Bell Labs will be interacting closely with computing machines before I leave. Well, you all have terminals now. I thought hard about where was my field going, where were the opportunities, and what were the important things to do. Let me go there so there is a chance I can do important things.
..." </i>
<p/>
If that were true in 1986, when Hamming made his speech, how much more true is it likely to be now?
<p/>
The mistake most scientists make is to consider programming a "blue collar" activity, not worth focusing on (this might be true for the top 1% or so whose native genius will carry them through, but most scientists I know can use all the tools they can get). I hypothesize that in the 21st century a scientist (or engineer) who can't code is handicapped - not so badly as a poor public speaker or a poor writer would be (and both are very vital skills for a research career), but handicapped nonetheless. Maybe there is a case for making (say) SICP + python + algorithm analysis + usage of basic version control tools a part of the science curriculum.
<p/>
The mistake most programmers (who loathe their cubicle farms and the brain dead enterprise codebases they maintain) make is to think that research and engineering are somehow beyond their ability to tackle. One crucial contributing factor to this perception is that most people learn mathematics as a bunch of formulae to memorize for an exam, rather than as a powerful modeling tool that penetrates and simplifies complex systems. It doesn't help that, in India at least, the engineering and science educational system is fundamentally broken and emphasizes rote learning and obedience to authority over curiosity and intellectual rigor.
<p/>
I write my blog to help me clarify my thinking. I couldn't care less if no one reads it. (Having said that, I have "met" some brilliant people through the blog.) This and the two preceding blog entries came about because I have been struggling to nail down a research statement - something beyond the current "I am interested in Robotics and Compilers". One of my mentors asked me to do this. I think this is good advice. More on this in the next (and last in this series) post.
<p/>
<b>So what's wrong if you aren't an engineer?</b> (2008-01-24)
<p/>
Nothing at all! To quote Reg Braithwaite (again! But the man has a way of using words that is very eloquent. I can't resist).
<p/>
<i>[This is the second in a series of four blog entries. If you haven't already you might want to read <a href="http://ravimohan.blogspot.com/2008/01/ratcatchers-and-engineers.html">part one</a> first.]</i>
<p/>
<i>
What's wrong with being a clerk? Nothing. It's only a problem if deep in your heart you despise clerks and you spend your life in denial about the career you have chosen. I wouldn't wish that on anyone, so I asked my readers to think about that carefully.
<p/>
Likewise, we can argue about what activities from programming can or cannot be considered Engineering. But really, even if you don't do any Engineering, what's wrong with that?</i>
<p/> Heh!
<p/>
My last blog post seems to have ignited a mini firestorm. Observing how people react to an idea is sometimes more fun than the original idea itself. Reg <a href="http://weblog.raganwald.com/2008/01/are-you-engineer.html">gets it</a>. In one of his replies to a comment (do read his blog entry), he says
<p/><i>
"the statement “~p implies ~q” says nothing about whether p implies q."</i> (typo corrected - thanks, Arne!) <p/>This is a bit more subtle than it seems. Try substituting p = "You use mathematics" and q = "You are an engineer", and work out "~p => ~q" and "p => q" (=> is implication and ~ is negation). You may be surprised!
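Reg's observation can be checked mechanically by enumerating every truth assignment of p and q. A throwaway Python sketch (the "implies" helper is my own illustration, not something from Reg's post):

```python
from itertools import product

def implies(a, b):
    # Material implication: "if a then b" is false only when a is true and b is false.
    return (not a) or b

# Enumerate all four assignments of p and q.
for p, q in product([True, False], repeat=2):
    print(f"p={p}, q={q}: ~p=>~q is {implies(not p, not q)}, p=>q is {implies(p, q)}")

# The assignment p=True, q=False makes "~p => ~q" true but "p => q" false,
# so knowing "~p implies ~q" tells you nothing about whether p implies q.
```

The p=True, q=False row is the counterexample: granting that "not using mathematics" implies "not an engineer" does not make "using mathematics" imply "engineer".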
<p/>
I am continually astounded at how many people respond to arguments or claims without doing a logical analysis of what's being said. (Note: I am using "argument" and "claim" (and other words like "theory") in the logical/scientific sense, NOT in the "I had this three hour argument with my wife. She asked me to wash the car and I refused. At the end I was shouting and she was in tears. My theory is that women are a different species" sense.)
<p/>When I was a debater, in my younger days, one of the lessons I learned early is that you don't counter an argument at your emotional or "gut" level, by calling your opponent names or attributing motives to him (unless you are trying to be a politician, when these tactics do pay off). The way to counter an argument is to dissect its logical structure and show it is invalid (in specific contexts, if required). Rhetoric by itself can be powerful (and many politicians know this), but layered on top of a logically sound argument it is devastating. There are very precise ways of doing this, going back many centuries, at least as far as Aristotle and Plato.
<p/> In my last blog post, I made the claim that most software developers are not engineers. Here, I'll make another claim, even more provocative. <span style="font-weight:bold;">Most software developers don't understand logic either</span>. You think I am wrong? Quick (assuming you are a software developer): what is the difference between the "if .. then" construct in programming languages like Java and the logical "if .. then" (aka implication, often denoted by =>)? If you, a software developer, answered correctly without having to think about it, rest assured, you are in a minority.
<p/>
<p/>Using logical implication you can say (assuming we are talking about this Earth and this time stream) "If Napoleon Bonaparte was born in Europe, the Sun rises in the East" and have it evaluate to true. But of course. What's so surprising? "If Napoleon Bonaparte was born in India, the sun rises in the West" or "If Napoleon Bonaparte was born in India, the sun rises in the East" also evaluate to True! :-D. ("If Napoleon Bonaparte was born in Europe, the sun rises in the West" evaluates to False.) What does the birthplace of Napoleon Bonaparte have to do with where the sun rises? ;-)
<p/>Confused? Heh! Don't worry - most people get totally zonked when they see this example for the first time. The key is to realize that implication is not causation. (Neither is correlation, but that is another topic. See this <a href="http://scruffylookingcatherder.com/archive/2008/01/22/tdd-proven-effective-or-is-it.aspx">debunking</a> of a claim that <a href="http://haacked.com/archive/2008/01/22/research-supports-the-effectiveness-of-tdd.aspx">"research supports the effectiveness of TDD"</a> for an example of correlation vs causation.) The trick I pulled is, of course, that many variants of the English "if .. then" are different from the logical "if .. then". [1] Many people learn the truth table of implication without really internalizing what it means.
<p/> When I teach programmers first order logic, this is a constant stumbling block. The solution is simple. I ask them to think of the logical "if X then Y" (where X and Y are booleans or boolean valued expressions) as equivalent to the programmatic "if X then Y else True". The "else True" is key. In other words (thinking programmatically): does X have the value true? If so, return (the truth value of) Y; else return true. Apply this to the "Napoleon" arguments and you'll get the correct (logical) answer for all possible combinations.
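The "if X then Y else True" reading can be written out directly. A minimal Python sketch (the Napoleon propositions are just named booleans for illustration):

```python
def logical_if(x, y):
    # Logical "if X then Y", read programmatically as "if X then Y else True".
    return y if x else True

born_in_europe = True   # Napoleon was in fact born in Europe (Corsica)
sun_rises_east = True   # and the sun does rise in the East

# The four "Napoleon" statements from the text:
print(logical_if(born_in_europe, sun_rises_east))          # born in Europe => rises East: True
print(logical_if(not born_in_europe, not sun_rises_east))  # born in India  => rises West: True
print(logical_if(not born_in_europe, sun_rises_east))      # born in India  => rises East: True
print(logical_if(born_in_europe, not sun_rises_east))      # born in Europe => rises West: False
```

A false antecedent makes the whole implication true regardless of the consequent, which is exactly why both "born in India" statements evaluate to true.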
<p/>
Why is such a confusing notion important? Proofs are logical structures using the primitives of FOL. A large part of mathematics (and science) is proofs. From science comes engineering. And if you are not using mathematics, you are not an engineer (ducks for cover ;-)).
<p/> There are 5 connectives (not, and, or, if (or implication), and iff (or double implication)) and two quantifications (universal and existential) in First Order Logic, which need to be mastered before one can go on to things like proofs and logical structure. That's the bad news. The good news is that working through a book on logic (and there are PLENTY of those) will teach you how to use logic.
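For programmers, all five connectives and both quantifiers have direct Python counterparts: boolean operators for the connectives, and all/any over a finite domain for the quantifiers. A small sketch, with example domains of my own choosing:

```python
def implies(p, q):
    # if (implication): false only when p is true and q is false
    return (not p) or q

def iff(p, q):
    # iff (double implication): true when p and q have the same truth value
    return p == q

p, q = True, False
print(not p)          # not
print(p and q)        # and
print(p or q)         # or
print(implies(p, q))  # if
print(iff(p, q))      # iff

# Quantification over a finite domain:
E = [2, 4, 6, 7]
print(all(x % 2 == 0 for x in E))  # universal: "for all x in E, x is even" -> False (7 is odd)
print(any(x % 2 == 0 for x in E))  # existential: "there exists an even x in E" -> True
```

The same machinery also illustrates the quantifier point quoted below: any() being true over a domain says nothing about whether all() is true over it.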
<p/> Reg goes on to say (in the same comment stream)
<p/>
<i>
I remind everyone that "exists x such that x ~member of E does not imply that for all x, x ~member of E."
<p/>
In other words, the fact that not using math means you aren't an Engineer does not imply that using math makes you an Engineer, for whatever definition of math we agree on. </i>
<p/>
Exactly so!
<p/> The point of the last blog post wasn't "you suck you enterprise developer subhuman moron", but "Don't delude yourself". No more, no less. Do what you love. And have fun!
<p/>
PS: Once you know logic, using it to construct (or deconstruct) an argument is trivial. But for those who want to make sound arguments without necessarily studying "raw" logic (I hope you are not a sw dev ;-)), take a look at <a href="http://www.amazon.com/Craft-Argument-3rd-Joseph-Williams/dp/0321453271/">"The Craft of Argument"</a> by Joseph Williams and Gregory Colomb.
<p/>
[1] <a href="http://en.wikipedia.org/wiki/Paradoxes_of_material_implication">Wikipedia</a>
<p/>
Part Three of this series is <a href="http://ravimohan.blogspot.com/2008/01/engineering-some-working-definitions.html">here</a>.
<p/>
<b>Ratcatchers and Engineers</b> (2008-01-23)
<p/>
Ever wondered if what you are doing is (software) engineering? Here is a heuristic.
<p/>
If you don't use mathematics in your <span style="font-weight:bold;">day to day</span> work, you aren't an engineer. All engineers (say, those who build bridges, or spacecraft, or cars) make heavy use of mathematics and/or hard sciences like Physics on a regular basis.
<p/> Now, not being an engineer is ok. Being a carpenter or a plumber is a perfectly honorable choice as is being a musician or actor or teacher. If you enjoy being a carpenter/plumber/automobile mechanic, more power to you. You should do what makes you happy and puts bread on the table. That said, a craftsman is not an engineer. The guy in the garage who fixes your engine is not an automobile engineer who could design the next generation car. Not close.
<p/>
This insight was triggered by Raganwald's <a href="http://weblog.raganwald.com/2008/01/no-disrespect.html">"No Disrespect" blog post.</a> I quote
<p/>
<i>...Let me tell you the cold, hard, truth. You aren’t going to like this, but I ask you to believe me when I say that I am telling you this for your own good:
<p/>
There is a culture of pretending business programming is more than it is. Some of you calling for more Java in University may take false hope that I am on your side. You may think that the people arguing for Scheme, Haskell, and OCaml are elitists. Wrong. They do not have a problem. You are the one with a problem because you don’t want to tell all your friends you have a job as a clerk.....</i>
<p/>We all know what the typical software "engineer" job ad looks like. A job ad for a real engineer would look like <a href="http://www.allthingsdistributed.com/2007/07/job_opening_for_a_senior_resea.html">this</a>. (Note the absence of "10 years in Java/dotNet/RoR" type crap. Note that he explicitly asks for a PhD, and tells you under what circumstances he will waive it.)
<p/>
The distinguishing trait of an engineer (and Werner's job description explicitly asks for this) is that he builds and works with mathematical models to design a real world effect or system. Engineers also use other tools (simulations, prototypes, experiments etc) but (mathematical) modeling is <span style="font-weight:bold;">key</span>. A scientist, as distinct from an engineer, uses roughly the same tools to advance the state of knowledge without necessarily affecting the real world. The borders are fuzzy. You have scientist-engineers and engineer-scientists, as well as people who focus on "pure" science or engineering.
<p/>"Modeling" is a deep topic. Read the book I've referred to at the end of this post for examples of how this works. Suffice it to say that if I am not building (say) algorithmic models to help me decide how to build my software - or, to generalize, if I am not using "theory" on a day to day basis - I am not *engineering* anything. Modeling has a very precise meaning in Engineering and Science. (No, UML diagrams or "story cards" are not engineering artifacts, no matter what the methodology vendors say ;-) ).
<p/> Enterprise software is the least amenable to the modeling/engineering approach. There are exceptions but most "enterprise" developers are the equivalent of clerks, as Raganwald so eloquently points out. There is nothing wrong with being a clerk as long as you <span style="font-weight:bold;">know</span> you are one and are not deceiving yourself. Most enterprise software projects, in keeping with their clerical nature, are <a href="http://ravimohan.blogspot.com/2006/07/but-martin-enterprise-software-is.html">life draining</a>. But hey if you like it, go for it.
<p/>
Another friend of mine, who is a <span style="font-weight:bold;">very</span> talented programmer (he recently moved away from enterprise app development), when asked why he changed the focus of his career, told me (paraphrased), "Humanity has only a very limited amount of talented people. It is a crime against humanity to employ that talent to bug fix enterprise applications or futz around with Ruby On Rails deployment issues. I want to do something meaningful". (As you can see, I have interesting friends :-) ).
<p/>To conclude, the title "Software Engineer" is (most of the time) a particularly deceptive one. To be accurate it should be something like "Software Maintenance Worker" or "Software Handyman", but I guess it is easier to hire someone if his job title is "Rodent Officer" instead of "Ratcatcher".
<p/>PS:- Before anyone feels offended and sends me hate mail or snarky comments, please read (something like) <a href="http://www.amazon.com/gp/product/0262731428/">"The Idea Factory - Learning to Think at MIT"</a>. (This was the book that jolted me out of my complacency and set my feet firmly on the research/engineering path.) Then actually think about it :-). Either be happy as a ratcatcher or do something about it.
<p/>
<i>
[This is the first in a series of four blog posts. Read <a href="http://ravimohan.blogspot.com/2008/01/so-whats-wrong-if-you-arent-engineer.html">part two</a> and <a href="http://ravimohan.blogspot.com/2008/01/engineering-some-working-definitions.html">part three</a>]
</i>Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com41tag:blogger.com,1999:blog-14853042.post-83705147363977415832008-01-16T21:49:00.000+05:302008-01-19T04:57:16.893+05:30Dev Camp BangaloreSome folks at my former employer, Thoughtworks, are planning a <a href="http://devcamp.in/wiki/Main_Page">DevCamp</a> in Bangalore on February 9th, 2008.
<p/>This is an excellent idea, well overdue. Barcamps in Bangalore are overrun by people trying to finance the next "social networking" (gag!!) or "web 2.0" (double gag) "startup" by finding some dumb non-technical VC who doesn't know what's going on, plus photography and movie clubs and so on. And what technical sessions do exist are of the "Introduction to X" variety, where X ∈ {Ruby on Rails, Erlang, fad-du-jour}, mostly cut and pasted from the web.
<p/> The Dev Camp web page has these instructions prominently displayed (emphasis mine).
<p/>
"However, please assume a high level of exposure and knowledge on the part of your audience and tailor your sessions to suit. <span style="font-weight:bold;">Avoid 'Hello World' and how-to sessions which can be trivially found on the net</span>. First hand war stories, in-depth analysis of topics and live demos are best."
<p/> This should keep the fakes away (touch wood). Also, there will be a few participants from Thoughtworks, and if the past is any guide, the Thoughtworkers' sessions will be well worth listening to.
<p/> One of the tenets of a "camp" is that there are no passive participants. Given that this is meant for developers, I hope to attend some high quality sessions. And if I attend, in the spirit of XCamp, I'll be presenting too. I will speak on <span style="font-weight:bold;">one</span> of
<ol>
<li>Monads in Depth - what they are, the underlying mathematics, what they are good for and how to use them in your favorite language</li>
<li>Reinforcement Learning - Algorithms and Applications in Robotics</li>
<li>Proof Techniques - a tutorial for Developers</li>
<li>Vajra - a high performance Lisp for Robotics Programming</li>
</ol>
<p/> Of course the topic list is highly fluid and I just might end up speaking on something else, in keeping with the spirit of XCamp, but if you have a preference, send me an email!
<p/> If any readers of this blog attend, stop by and say hello!Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com8tag:blogger.com,1999:blog-14853042.post-50163920467053051892008-01-01T10:16:00.000+05:302008-01-03T09:04:23.291+05:30Sliding Into ScalaI've been dissatisfied with Java for a while now, but to give the devil his due, it does hit a sweet spot. When you are looking for a cross platform, fast, statically typed, easily deployable language with tonnes of libraries, there isn't much else available. But on the other hand, the language itself is mind-numbingly verbose, and since most of my coding these days is in a combination of Scheme and C, when I do switch back to Java, it is as if I am suddenly running through quicksand, Eclipse notwithstanding.
<p/>
Generics (the implementation, not the idea) was the first misstep in Java's evolution. It makes it easy to write code that is almost impossible to read. For the AIMA code, for example, when I used pure Java 5, generics-heavy code, students complained that they couldn't make out what the code was doing, and so for code that other people have to maintain or extend, I end up writing Java in a style I call "1.4 +" - Java 1.4 + enums + the new for loop + generics *for collections* (only). The actual type system of Java 5 is a fairly simple (even simplistic) one (once you've worked through a book like <a href="http://www.cis.upenn.edu/~bcpierce/tapl/">TAPL</a>), but the syntax is atrocious. The present controversy over "closures" (closure != anonymous function, and I am tired of programming illiterates abusing terms with clearly defined meanings, but that is a rant for another day) convinced me that Java is on the wrong track and is well on its way to obsolescence. "Java is the new Cobol" indeed.
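To make the "1.4 +" point concrete, here is a hypothetical sketch (the names are made up, not from the AIMA codebase) contrasting a bounded-wildcard-heavy signature with the plainer style - both do the same thing, but the first signature is the kind students struggled to read:

```java
import java.util.ArrayList;
import java.util.List;

public class GenericsStyle {
    // Generics-heavy style: the bounds are technically precise
    // but obscure what the method actually does.
    static <T extends Comparable<? super T>, L extends List<? extends T>>
    T firstMax(L candidates) {
        T best = candidates.get(0);
        for (T t : candidates) {
            if (t.compareTo(best) > 0) best = t;
        }
        return best;
    }

    // "1.4 +" style: generics only on the collection, plain signature.
    static Integer firstMaxSimple(List<Integer> candidates) {
        Integer best = candidates.get(0);
        for (Integer t : candidates) {
            if (t.compareTo(best) > 0) best = t;
        }
        return best;
    }

    public static void main(String[] args) {
        List<Integer> xs = new ArrayList<Integer>();
        xs.add(3); xs.add(7); xs.add(5);
        System.out.println(firstMax(xs));        // prints 7
        System.out.println(firstMaxSimple(xs));  // prints 7
    }
}
```

The second version gives up some reusability, but for code handed to students or maintainers, the readability trade is usually worth it.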
<p/>
What COBOL never had was an open source, cross platform VM that other languages could target, plus a few hundred thousand libraries. Java != JVM, in other words. Couldn't I just use another language on the JVM? Well, yes, but - I don't particularly like Ruby. Jython hasn't caught up with the latest version of Python. Both these languages have dynamic type systems and are slower than compiled Java. So there isn't much actual choice, given the parameters I listed above (fast, statically typed ...).
<p/>
Enter <a href="http://www.scala-lang.org/">Scala</a>. I looked at Scala a year ago, tried some sample scripts, stumbled across a bug and gave up. But things have changed since then. I've been experimenting with Scala over the last few days and I am tremendously impressed. Besides writing small scripts to explore various language features, I've been rummaging through the code for the compiler and type checker (written in Scala, of course). Martin Odersky has something that is all too rare in the software world today - a strong sense of design (I am looking at you, Ruby on Rails). Scala has too many brilliant features to go into any great detail here (there are plenty of blogs that do go into detail; see <a href="http://unenterprise.blogspot.com/2007/12/no-seriously-why-scala.html">this</a> or <a href="http://technically.us/code/x/the-escape-hatch">this</a>, e.g.), but in essence code compresses to almost nothing, the type system is brilliant, and pattern matching is something I've wanted in a JVM language for a long time. One can even do monadic programming in Scala! I am VERY impressed.
<p/>
Now don't get me wrong, there are a few rough edges. For example, one of the things it doesn't have (as of this moment) is a good reflection API. I tried to write an xUnit clone to get my feet wet and got stuck on identifying all the methods that start with "test", for example. (It *can* be done by leveraging the underlying Java, but not from native Scala.) The build system is a mess (try building from source and watch the build fail for lack of heap space - this is unforgivable in 2008) and there is less of a focus on testing/regression than I'd like. And oh, if you are using the interpreter, make sure you use jline ("sbaz jline" should do it - I don't know why this isn't part of the standard distribution).
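For the curious, the "leverage the underlying Java" workaround is just ordinary Java reflection; since Scala classes compile to JVM classes, the same machinery applies. A minimal sketch (in plain Java for clarity; `SampleSuite` is a made-up stand-in for a user's test class, not anything from my xUnit clone):

```java
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class TestFinder {
    // Enumerate the public methods of a class and keep the ones
    // whose names start with "test" - the core of an xUnit runner.
    public static List<String> testMethodNames(Class<?> clazz) {
        List<String> names = new ArrayList<String>();
        for (Method m : clazz.getMethods()) {
            if (m.getName().startsWith("test")) {
                names.add(m.getName());
            }
        }
        return names;
    }

    // Hypothetical test class an xUnit clone would scan.
    public static class SampleSuite {
        public void testAddition() {}
        public void testSubtraction() {}
        public void helperMethod() {}
    }

    public static void main(String[] args) {
        System.out.println(testMethodNames(SampleSuite.class));
    }
}
```

From Scala you would call the same `getMethods` via Java interop; the pain point is only that there was no *native* Scala reflection layer at the time.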
<p/>
But these are minor quibbles - Scala is a brilliant accomplishment. I think Sun should just declare Scala to be Java 8 and be done with it.Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com16tag:blogger.com,1999:blog-14853042.post-1452986165517666872007-12-28T11:28:00.000+05:302007-12-28T11:31:38.255+05:30Managers consume what?from
<a href="http://blog.hbs.edu/faculty/amcafee/index.php/faculty_amcafee_v3/how_to_hit_the_enterprise_20_bullseye/">this site</a>,
<p/>
<i> ....managers are voracious consumers of theory. In other words, they value ways to think about their world, and mental tools that will let them make decisions and predictions with a level of confidence higher than they get from experience and intuition alone.</i>
<p/>
Ummmm. Ok. :-PRavihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com2tag:blogger.com,1999:blog-14853042.post-76964944402270194392007-12-16T19:55:00.000+05:302007-12-19T22:44:51.206+05:30Teaching MathematicsI've just taken up an assignment to mentor one of my friends in maths/proof/logic.
<p/>
My "student" has no background in maths or logic (if you ignore the negative knowledge gained by sitting through a bachelor's degree in Engineering in India's f***ed up educational system), but is a good programmer.
<p/>
The expected post condition is that my "student" will be able to read and work out mathematical proofs, design an experiment, analyze an existing research paper, work out an algorithm for a given task, understand basic calculus and linear algebra, and in general gain the ability to conduct independent research in Machine Vision. I (and some others, like my friend Rajesh, e.g.) have worked out some of these "how to"s over the last few years, mostly by brute force search of the landscape, discovering pitfalls by falling into them (and making a careful map, which it is now time to pass on).
<p/>
"How to work out a proof" for example is rarely taught (well) in a classroom setting and I have had to synthesize knowledge from many sources to come up with a workable technique. I can save my "student" hundreds of hours of work.
<p/> Teaching is how one crystallizes one's knowledge. I've always enjoyed teaching, but what <a href="http://ravimohan.blogspot.com/2006/01/to-teach-is-to-learn-twice-and-more.html">I've done earlier</a> was in a one-to-many format, involving speaking to a group of people, using a blackboard to derive a proof or a projector to display code.
<p/>
The one to one format will be a new experience. I spent a few hours today working out a "syllabus" and it looks very good, even if I do say so myself :-)
<p/>
Update: I plan to teach first-order logic and proofs first, probably using books by Velleman, Polya, and Doets and van Eijck.Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com7tag:blogger.com,1999:blog-14853042.post-91535916082489614952007-12-13T22:14:00.000+05:302007-12-13T22:41:45.395+05:30You can't catch upOne of my friends recently told me (I have interesting friends :-)) "Every time I meet you, you have moved further ahead in [area of focus], so how do I catch up with you?"
<p/>
The answer == "You can't, unless (a) I stand still and wait for you to catch up - Why should I? (b) You figure out how to move a few hundred percent faster than me (Good luck with that)"
<p/> IMO, the right thing to do with life is <em>not</em> to waste cycles "catching up" with other people. I'd rather do something that is uniquely <em>mine</em>. Competing with oneself is hard enough without wasting bandwidth on what other people are doing or how they are "drawing ahead" or whatever.
<p/>
And I am not talking about robots engaging in non-lethal activities like scouting or mine clearing, though that's how robots will initially appear on the battlefield. I am talking of robots (on land, sea and air) armed with guns and other weaponry, increasingly deciding for themselves when to fire and what to fire at.
<p/> I believe that wars of the future will have two major features: asymmetrical war (what we call "insurgency" - blurring the boundaries between "civilians" and "soldiers") and increasing participation by ever more sophisticated autonomous robots/remote controlled weapons. (See <a href="http://www.exile.ru/articles/detail.php?ARTICLE_ID=6691&IBLOCK_ID=35&PAGE=1">this column</a> for some thinking along these lines.) I'll ignore the first aspect and focus on the second, in the interest of keeping the size of this post manageable.
<p/> Here is how robotics will develop in war. First, robots will engage in non-lethal activities like mine clearing or I.E.D. detection (this is happening today). Then you'll see them accompany human combat units as augmenters and enablers on real battlefields (this is beginning to happen). As robotics gets more and more sophisticated, robots will take up potentially lethal but non-combat operations like patrolling camp perimeters or no-fly areas, opening fire only when "provoked" (this is beginning to happen too). The final stage will be when robotic weapons are an integral part of the battlefield, just like "normal", human-controlled machines are today, and make autonomous or near-autonomous combat decisions. These robots will look nothing like the vaguely humanoid robots of Star Wars - they will have shapes suited to their roles - think anything from tiny dragonfly- or cockroach-like systems to computerized tanks and helicopters.
<p/> And this type of robotic battle system will not be confined to the major superpowers. As of today, what gives robots an edge is not really hardware but sophisticated software, something that does not depend on superpower status. Someone in Korea or India or China, or even a semi-military corporation, can gain a temporary edge in robotics software comparatively easily (though the <a href="http://en.wikipedia.org/wiki/Red_Queen">Red Queen Effect</a> will kick in soon enough to close the gaps), sufficient to tip a battle or a war one way or another.
<p/> The funny thing is that quite a bit of the required technology is available, sometimes in embryonic form, <em>today</em>. "The future is already here - it's just not evenly distributed", as William Gibson said. I <em>know</em> that the Indian defense forces have made substantial progress in battlefield robotics systems (I can't be more specific - the details are classified, so don't ask! :-D ), and what India is on the verge of doing, others are, too.
<p/> None of this is to claim that future combat will occur between machines, all clinical and detached with no blood being spilled. On the contrary.
<p/>Historically, every innovation in technology has been hailed as bringing in an era of bloodless warfare. When the airplane was invented, for example, many columns were written on how it made war obsolete because, obviously, with planes roaming the sky no army could make unobserved movements, and so war was now futile! Dresden and the Blitz and Hiroshima were just around the corner.
<p/>
My prediction is that war will become more lethal, with increasingly sophisticated autonomous weapons systems and ever more vicious asymmetrical warfare twisting the business of making "the other fellow die for his country" (or ideology or religion or ...) into newer and ever more horrific forms. And that is something that all of us working on robotics or machine learning should think about.
<p/>
Update: <a href="http://blog.wired.com/defense/2007/08/httpwwwnational.html">First Armed Robots deployed in Iraq</a> . While these are not really autonomous systems (they are remote controlled) and substantial technical challenges remain on the path to truly autonomous systems, people conducting research in robotics know how fast these barriers are falling. Those who are not researchers in robotics/ai may mull over the progress in autonomous navigation of vehicles from 2004 to 2007 as exemplified by the <a href="http://en.wikipedia.org/wiki/DARPA_Grand_Challenge">DARPA Grand Challenge</a>. Now extend that rate of progress to areas other than navigation.Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com9tag:blogger.com,1999:blog-14853042.post-49890823389557047702007-11-26T16:05:00.000+05:302007-11-26T16:16:45.057+05:30Books for 2008These are the books I'll be working through in 2008.
<p/>
<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXoyR0uso7d2dUfcfdeuA8XPtYkGcA8wxNnZp-NpzvAscDDWHhMl768jasg-M90TWI3HUhmwCSs_VEo1w775q-NlXYHKB1DdXX_ySVKx77Ig1snRc245p9zkUTy9KD8OAlGjFN/s1600-h/ndpbook.jpg"><img style="cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiXoyR0uso7d2dUfcfdeuA8XPtYkGcA8wxNnZp-NpzvAscDDWHhMl768jasg-M90TWI3HUhmwCSs_VEo1w775q-NlXYHKB1DdXX_ySVKx77Ig1snRc245p9zkUTy9KD8OAlGjFN/s320/ndpbook.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5137097240969255538" /></a>
<p/>
<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCNAjkrLpwxS-Svl5vMf2GP5os8309-gwWoG_rJMfXUfLP0Y3gG5Wlids9u0QXpwncTe6csQJeiGODR5nj5H0V-KbAY6YjZ3_mrEPU_NcJVJA1Hs3wKV7_m5oL7mWoG_f7vLUe/s1600-h/cormen.jpg"><img style="cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjCNAjkrLpwxS-Svl5vMf2GP5os8309-gwWoG_rJMfXUfLP0Y3gG5Wlids9u0QXpwncTe6csQJeiGODR5nj5H0V-KbAY6YjZ3_mrEPU_NcJVJA1Hs3wKV7_m5oL7mWoG_f7vLUe/s320/cormen.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5137097717710625410" /></a>
<p/>
<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8CAahOum0rk1F03hckt8TYJlFYpRyVPseLJK4Bp7bdnh5HZMieavvflHP-AHdlGyb6ZKc9IqAqp7VDSk2nv-rGw5jqbfn8Od5X7O88cQVl265EOZbrxUDDa9fap4Gh9btNmhz/s1600-h/computervision.jpg"><img style="cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi8CAahOum0rk1F03hckt8TYJlFYpRyVPseLJK4Bp7bdnh5HZMieavvflHP-AHdlGyb6ZKc9IqAqp7VDSk2nv-rGw5jqbfn8Od5X7O88cQVl265EOZbrxUDDa9fap4Gh9btNmhz/s320/computervision.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5137097928164022930" /></a>
<p/>
<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5r-dxPvkdfalp5i69RsqYCQJIabmWdK2xI93psLktmV42FKfBDc43dRatvgf_dCWiA___sZ3d4WJ345OLk07nkvUFhHVI1b6LEFwXt-ccBo50ouULwausrdgn7RsnmZ3q8mSv/s1600-h/probrob.jpg"><img style="cursor:pointer; cursor:hand;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEj5r-dxPvkdfalp5i69RsqYCQJIabmWdK2xI93psLktmV42FKfBDc43dRatvgf_dCWiA___sZ3d4WJ345OLk07nkvUFhHVI1b6LEFwXt-ccBo50ouULwausrdgn7RsnmZ3q8mSv/s320/probrob.jpg" border="0" alt=""id="BLOGGER_PHOTO_ID_5137098190157028002" /></a>Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com6tag:blogger.com,1999:blog-14853042.post-33494625089987814352007-11-16T23:39:00.000+05:302007-11-16T23:54:59.564+05:30Meet "Spineless"<a href="http://magicindian.com/spineless/ravi/">Spineless</a> is a small Django app my friend <a href="http://lawfulsamurai.blogspot.com/">Manoj</a> whipped up to help me keep track of my books. People generally borrow books from me and I have a hard time remembering who took what when. Even after giving away almost all my 300 or so java/j2ee/dotNet/"agile"/"enterprise" books, I still have a few hundred books lying around in Bangalore and more in Trivandrum. I have entered the first 50 or so books into the app (actually I type some keywords and the app looks up the books in Amazon and grabs the details and plonks them into a database). There has been some demand for something like this, so all who asked, enjoy! I'll keep adding books whenever I find the time (which is a commodity in very short supply right now :-( ).
<p/> One of my friends asked me to enter my book list into <a href="http://www.librarything.com/">Library Thing</a>, but I have a visceral dislike of "social networks". Maybe too strong a mental association with Ruby on Rails, I don't know exactly why.
<p/> And before you ask, the name doesn't really refer to books' spines or anything to do with books, really. It has to do with a private joke about something someone we know said recently. Manoj liked it so much that he named the app after it. Mention "optimization" to Abey or me and we'll roll around laughing. Mention "Spineless" to me or Manoj and you'll get the same reaction. Commemorating stupid things said by intelligent people (or vice versa) is a bit of a tradition in my circle of friends. What's life without people to make you laugh?Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com5tag:blogger.com,1999:blog-14853042.post-45593394899556653842007-11-15T13:29:00.000+05:302007-11-15T13:39:25.818+05:30Fish Eye PlansOne of the emergent effects of my time management meta focus is that I have what I call a "fish eye" plan.
<p/>
I know in great detail what I will be working on in the immediate future (say, the next few days). Next week, I know, but in less detail. I know what I'll be working on next month, but at a higher level of abstraction. Next quarter, even more abstract, but I *do* know what I'll be doing. So if you ask me what I'll be doing in (say) March 2008, I'll have an answer. Not many people can answer that question. This is a very refreshing change.
<p/>This is not to say that plans won't change. They will, but you know what they say about failing to plan.
<p/>What pleases me the most is that I didn't set out to do this. I just followed Dr Randy Pausch's "system" of time management (with many tweaks and some significant additions from elsewhere) and it all just fell out.
<p/>
Life is good.
<p/>Question: Is anyone in Bangalore (anywhere in India will do at a pinch) doing significant programming in Haskell? Mail me please. Thanks in advance.Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com3tag:blogger.com,1999:blog-14853042.post-75403797910109687092007-11-14T12:39:00.000+05:302007-11-14T15:08:43.816+05:30I won't go seal hunting with youI told someone a few minutes ago "I think you are very capable and like you a lot but I don't trust you one inch". (Yeah I do say things like that in real life :-D).
<p/>After the (very unproductive) conversation was over, I was trying to recollect where that phrase/concept came from. It was certainly not an original thought/phrase, and I was sure I had read it somewhere. Turns out it was from Edward de Bono, who said (paraphrased), "Some languages have one word which says 'I like you very much, but I would not go seal-hunting with you'," so one can say (in that language) "I [word] you", where [word] has the meaning of the phrase above.
<p/> I forget which book he used the phrase in but, IIRC, the context was that sometimes, when we lack the precise word for something, we need to repeat a long phrase.
<p/>This holds in technical conversation too. We don't say "change the structure of the code without changing the behaviour", we say "refactor". In the absence of the word "refactor" we would probably use the longer phrase, till someone got tired of it and invented the word.
<p/> Back to seal hunting. In this case, the context was one of doing a startup together and I suggested the phrase as an explanation why a few people don't "openly talk" to him, while the other people do talk to me "openly". Note that this doesn't mean that person X *is* untrustworthy. All it means is that I (and a few other people) *think* that in a crunch, X wouldn't stand by us and would do what's best for him vs what's best for us.
<p/> Is not trusting someone a problem? In a normal context, not at all. We all work with people whose competence we admire but we wouldn't want to trust them to keep our objectives and safety in mind when the stakes are high. We just add a double dose of caution, keep our guards up and are careful with words around that person, read the fine print very carefully and so on.
<p/> I am not very sure that would work in the context of a startup though. In a startup (from what I've read, not that I've ever founded a startup, though I've been employed in a couple) the pressure is so intense that one doesn't have the time to do all this armor donning and being careful. Founding startups is not like seal hunting - it is like hunting Great White sharks or the aliens from the movie "Predator". The founders absolutely *must* trust each other to do the right thing when the bullets (or harpoons or laser beams) are raining down.
<p/>Another thing I noticed in the conversation was the use of phrases which don't really mean anything but are invitations to games (in the Eric Berne, Transactional Analysis sense - the person proffering the invitation often doesn't know what he is doing, from a group dynamics perspective).
<p/>E.g., the conversation started with "I don't want to jeopardize our friendship but ..." and I was thinking, "What friendship? We aren't friends - we were never friends - we are professional acquaintances who speak about technology occasionally and have met at a couple of parties".
<p/>To me a friend is someone who you have let into the "inner courtyards" of your life. You can have many positive, enriching relationships which are of a lesser intensity than a friendship, but the word does have a defined meaning and it didn't make sense in this context.
<p/> That opening phrase puzzled me for a while, but then I realized that he didn't really mean it either. It is just a phrase whose meaning depends very much on the context of its usage, in this particular instance meaning something like "I am very uneasy about the fact that people don't relate to me like they relate to you. I am going to vent my feelings a bit, but I don't know how you'll react to that, so I'll just use this phrase and hope you believe 'maintaining our friendship' is the reason I am screaming".
<p/> Real friends don't <em>need</em> to use those kind of weasel phrases and would say something like "I don't know wtf you are doing, but I am concerned" or something to that effect. I try to avoid these meaningless phrases as much as possible, not because it is a particularly noble thing to do, but because I don't want to clutter a conversation with superfluities. "Say what you mean, mean what you say" is a surprisingly effective, not to mention efficient means of communication.
<p/> If you keep an open mind you learn something from the most trivial incident. I learned a few things today. Not too bad for a 20 minute conversation. Not bad at all.Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com7tag:blogger.com,1999:blog-14853042.post-1699562058968288092007-11-13T19:46:00.000+05:302007-11-13T21:56:11.815+05:30A list of topicsMy focus on Time Management has affected how I write on this blog. Earlier, when I felt strongly about something or when I was feeling bored, I'd just dash something off. This wreaks havoc with schedules and "getting things done", and I have often stopped writing for a few weeks or months. But this is dissatisfying in its own way.
<p/>
Now, when I think of something that I feel like writing about, I just add it to a list (on a wiki if I am at a computer, in a small notebook if I am not) and carry on with what I am doing. When a "write a blog post" sized chunk of free time comes around (15-20 minutes, typically) and I am working on the comp, I just pick an item off the list and write about it.
<p/>The style of writing is different from what it used to be because I am writing against a timer. More direct, less textured, which is probably the appropriate style for a blog post. The longer wait between conception and recording also allows the good ideas to percolate and the bad ones to show their unworthiness.
<p/>For anyone who is curious, my present topic list (this is a continuously mutating entity) looks like this.
<ol>
<li>San Francisco vs Bangalore - the right place for software startups?</li>
<li>The Idea Bottleneck and how it really isn't [DONE] </li>
<li>Stealing a leaf from Paul Graham.</li>
<li>SICP Stumbling Blocks</li>
<li>A Billion Dollars And Then?</li>
<li>Topic List on a Wiki [DONE] </li>
<li>Practical SICP - How SICP helps in real life projects</li>
<li>Derivative Trading as Programming Pedagogy</li>
<li> Growing into a (technical) book</li>
<li> Memorizing Shakespeare (and Shaw) </li>
</ol>
This is just a record of things I <em>might</em> write about. But the time management meta focus seems to be paying off already. I am writing more frequently than usual, and not feeling stressed at all.Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com4tag:blogger.com,1999:blog-14853042.post-49447894783352969842007-11-12T22:11:00.000+05:302007-11-13T18:14:54.062+05:30Someone stole my cell phoneso if you are reading this, and are on my phone list, please send me your phone numbers to repopulate the address book on my new cell phone.
<p/>Thanks in advance.
<p/> Update - I got a new SIM card but decided not to get a new phone till I complete my research paper. Meanwhile, please use email.Ravihttp://www.blogger.com/profile/03630087669712445498noreply@blogger.com0tag:blogger.com,1999:blog-14853042.post-91361297897374038772007-11-12T18:41:00.000+05:302007-11-13T08:01:39.864+05:30The Idea Bottleneck That Isn'tOne of my friends was talking about doing a startup in Bangalore. This is a conversation that comes up regularly. I tried to get a startup off the ground twice, but failed because the people who I wanted to work with didn't live in Bangalore and I didn't want to move.
<p/> Those attempts aren't what this post is about. Because this friend was talking about a startup and wanted some people to work with him, I spent some time talking to other friends who might be interested (I am well connected to interesting, technically capable people who are hungry for success and money).
<p/> One question I keep hearing is "What about ideas? Do you have an idea?". I have never understood this question. Generating decent ideas has never been a problem for me. I could generate 20 workable ideas in 20 minutes. And repeat that exercise a few times before I need to pause.
<p/> The first time I encountered this "idea bottleneck" concept, I didn't pay it much heed. But then someone I respect said I was not paying enough attention to "finding an idea" if we were to attempt that hypothetical startup. So I reached into my mental pile of ideas and gave him an idea to work with, while we work through other issues. He seems happy about it and sent me a link showing that a company with a similar idea (actually a trivial subset) had been acquired for $50 million in cash.
<p/> Another friend heard of this and said, "Dude, how can you give up your idea like that? What if he rips you off? Be careful!" First, the friend I told the idea to is a decent chap. Second, I don't really care if he (or anyone else, for that matter) "rips me off".
<p/>
My operating concept is that having ideas per se isn't all that important. It is the actual execution that matters and so I am not afraid of people "stealing" my ideas. It isn't as easy to do that as many people think. The idea I suggested to my friend, for example, sounds simple but there are all sorts of interlocking little problems that all have to be resolved perfectly for it to be successful. I can imagine someone spending a few months (and a few hundred thousand dollars) in an attempt to get it to work and not getting anywhere, because he isn't aware of what to tweak to get a particular effect or how to fix a subtle "error". In fact if anyone can execute the idea better, faster or more comprehensively than I can he is welcome to it.
<p/>Paul Graham says it better than I can (on news.ycombinator):
<p/>
".... if we told everyone about everyone else's ideas, it wouldn't be as bad as you might think. Secrecy is not as important as beginning founders think, because (a) ideas are less valuable than they think, and (b) the most common form of death for startups is suicide, not being killed by competitors."
<p/>The last sentence says it all.
<p/>
Update: One hour after this was written, the proposed startup project committed suicide :-D.
<p/> R.I.P :-D

<h3>Experiments with Time Management (2007-11-09)</h3>
After viewing Randy Pausch's <a href="http://video.google.com/videoplay?docid=2750363533451832628">lecture on Time Management</a>, I've been experimenting with various time management practices (yeah, I know how management-ey and buzzword-ey that sounds; sorry). It helps that I am going through <em><u>the</u></em> busiest month in my entire life, so I am more motivated than usual to make this work.
<p/>The most successful habit I've adopted is to put a money value on my time. As Randy says in the first few minutes of his talk, people are generally willing to give you their time, but not their money. If you ask people to spend a quarter of an hour to help you arrange the furniture for a talk, they will happily do so, but they won't hand over 10 $ for no obvious reason.
<p/> Once you start putting a dollar value on your time, though, strange things happen. For example, I decided not to attend Barcamp (I'd registered; I cancelled). Say an hour of my time is worth a measly 25 $ (being a cheap outsourced-to Indian programmer). Travelling to and from Barcamp and attending 3 or 4 sessions would take, at a minimum, 6 hours. 6 * 25 = 150 $.
<p/>
Would I spend 150 $ to attend Barcamp? Not on your life! To be honest, there are other disincentives too - I really don't want to attend a meet that is so "implementation lite", with so large a focus on things like search engine optimization (!) and blogging.
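The arithmetic above is trivial, but making it a habit is the point. A minimal sketch, using only the hypothetical numbers from the Barcamp example (your rate will differ):

```python
# A minimal sketch of the "money value of time" heuristic described above.
# The rate ($25/hour) and duration (6 hours) are the illustrative Barcamp
# numbers from this post, not recommendations.

def time_cost(hours, hourly_rate):
    """Dollar cost of an activity, valuing your time at hourly_rate."""
    return hours * hourly_rate

barcamp = time_cost(hours=6, hourly_rate=25)
print(barcamp)  # 150
```

The function is obviously not the point; the point is asking "would I hand over this much cash?" before committing the hours.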
<p/>
People actually think blogging is a big enough deal to form "collectives" about blogging? - there is even a "collective" for "radical ideas" -- yeah right, that makes sense .. NOT. The mind boggles. I consider blogging the intellectual equivalent of writing a post-it note. I cannot imagine self-identifying as a "blogger" and making a big deal out of it. God have Mercy.
<p/> Back to Time Management. Since I don't (in general) answer phone calls from unknown numbers when I am working, I don't have interruption issues. And once I concentrate on something, it is hard for people (even if they are sitting right next to me) to jerk me out of the zone - I don't hear anything when I focus on work, so they'd have to nudge me, and risk getting a laptop thrown at their heads. So I didn't benefit as much from that part of his talk. But the money value of time is an excellent perception to have. Would I spend 50 $ to have a good conversation with a friend or someone doing interesting things? Definitely.
<p/> He is also right about keeping a time journal. The result of such an experiment is the most horrifying document you will ever see in your life. The good news is that with such an abysmal base, you can make substantial improvements in very short order.
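The journal doesn't need to be fancy, either. A hypothetical minimal version, just to show the idea (the activity names and minutes below are invented):

```python
# A hypothetical minimal time journal: record (activity, minutes) pairs as
# the day goes by, then total them up to see where the hours actually went.
from collections import defaultdict

def summarize(entries):
    """Total minutes per activity from a list of (activity, minutes) pairs."""
    totals = defaultdict(int)
    for activity, minutes in entries:
        totals[activity] += minutes
    return dict(totals)

day = [("email", 45), ("meetings", 90), ("actual work", 60), ("email", 30)]
print(summarize(day))  # {'email': 75, 'meetings': 90, 'actual work': 60}
```

The horror comes from the totals, not the tooling.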
<p/> To round things off, a couple of snippets from a <a href="http://philip.greenspun.com/materialism/early-retirement/">Phil Greenspun article</a>.
<p/><i>Ask a wage slave what he'd like to accomplish. Chances are the response will be something like "I'd start every day at the gym and work out for two hours until I was as buff as Brad Pitt. Then I'd practice the piano for three hours. I'd become fluent in Mandarin so that I could be prepared to understand the largest transformation of our time. I'd really learn how to handle a polo pony. I'd learn to fly a helicopter. I'd finish the screenplay that I've been writing and direct a production of it in HDTV."
<p/>
Why hasn't he accomplished all of those things? "Because I'm chained to this desk 50 hours per week at this horrible [insurance|programming|government|administrative|whatever] job."
<p/>
So he has no doubt that he would get all these things done if he didn't have to work? "Absolutely none. If I didn't have the job, I would be out there living the dream."
<p/>
Suppose that the guy cashes in his investments and does retire. What do we find? He is waking up at 9:30 am, surfing the Web, sorting out the cable TV bill, watching DVDs, talking about going to the gym, eating Doritos, and maybe accomplishing one of his stated goals.
<p/>
Retirement forces you to stop thinking that it is your job that holds you back. For most people the depressing truth is that they aren't that organized, disciplined, or motivated. </i>
<p/>
This is so true it is not funny. And one doesn't have to retire to encounter this phenomenon. I know many people who drop out of the 9-5 rat race for various reasons (startups, research, a search for the meaning of life, whatever), and then find that months and years pass with nothing substantial being achieved.
<p/> Another interesting suggestion from the same article.
<p/><i>publish a public Web diary of what you do every day, thus discouraging you from wasting time because you'll be ashamed to admit that all you accomplished yesterday was a 15-minute oil change and a trip to Target</i>
<p/> That is a very interesting idea, but a little forbidding, exposing one's lack of productivity to the world. But I should probably grit my teeth and try it for a month or so. If you want to see what this would look like for someone who is really, really good at what he does (vs humbler mortals like you and me), see <a href="http://en.wikipedia.org/wiki/John_Carmack">John Carmack</a>'s <a href="http://www.scribd.com/doc/14192/John-Carmack-Archive-plan-1998">dot plan files</a> (warning - Flash; sorry, I couldn't find an html version of these and I am running out of my "write a blog entry" time slot). There's even a theory that these plan files were the origins of present-day blogs, no matter what Dave Winer says.
<p/>While I don't quite have the guts to "dot plan" my puny daily efforts, I could probably list what I plan to do each month on the first day of the month, and post a report on the last day detailing what actually got done.
<p/> So, as an experiment, here is what I need to do this month (November 2007) (client work excluded - privacy concerns blah).
<p/>
<ol>
<li>Research - Solve the last 2 issues, code, write up research results into a paper and send out for review. Hard deadline of 24th Nov</li>
<li>Write a Reinforcement Learning library for the friendly folks at the DRDO. This is <em>a lot</em> of work</li>
<li>Implement the algorithms in chapters 9-12 of <a href="http://aima.cs.berkeley.edu/">AIMA</a>. (First Order Logic and planning, for anyone working through the book). Release the next version of <a href="http://code.google.com/p/aima-java/">the AIMA Java code</a>. </li>
</ol>
Even with the newly adopted time management practices, that is a tonne of work, which brings me to -
<p/> Blogging Time Over. Back to Work.

<h3>Hackers Month of Learning Haskell aka X Y Month (2007-11-06)</h3>
The local newspapers are all going crazy about <a href="http://www.nanowrimo.org/">National Novel Writing Month</a>. Apparently NaNoWriMo is some kind of cultural phenomenon in which random people commit to writing 50,000 words in a month, hopefully finishing a (substantial part of a) novel.
<p/>I am not sure any worthwhile novels actually get written this way. If you wouldn't do it anyway, you probably won't do it just because other people are also doing it, but the idea has its possibilities. The "national" doesn't make sense in a networked world, so we'll replace it with the variable X.
<p/>Instead of novel writing we could do something more fun so we'll call it Y.
<p/> Let X = { ...bunch of adjectives... }. Let Y = { ...bunch of verbs... }.
<p/>So we could have "Personal Month Of Derivative Trading" or whatever.
<p/> From the NaNoWriMo website:
<p/><i>Valuing enthusiasm and perseverance over painstaking craft, NaNoWriMo is a novel-writing program for everyone who has thought fleetingly about writing a novel but has been scared away by the time and effort involved.</i>
<p/>I can certainly think of analogues in programming. Many people want to learn Haskell (say), but never get around to doing it because at any given time, there are more important things to do. Maybe someone should declare December to be "Hackers Learn Haskell" month and ask anyone participating to write, say, 500 functions in Haskell in a month. Or work through SICP completely. Or whatever.
<p/>From the NaNoWriMo site: <i>Because of the limited writing window, the ONLY thing that matters in NaNoWriMo is output. It's all about quantity, not quality. The kamikaze approach forces you to lower your expectations, take risks, and write on the fly.
<p/>
Make no mistake: You will be writing a lot of crap. And that's a good thing. By forcing yourself to write so intensely, you are giving yourself permission to make mistakes. To forgo the endless tweaking and editing and just create. To build without tearing down.</i>.
<p/> Programmers write a lot of crap code anyway, so at least that part won't be new.

<h3>The Algorithm Quotient of a Software Project (2007-11-04)</h3>
Still thinking of the DARPA Car Races, it strikes me that a project's dependence on algorithmic innovation for a successful outcome is in direct proportion to its "interestingness" from a developer's point of view.
<p/>
Think about it. All the cars that autonomously traverse 160 miles of desert or travel through an urban environment while obeying traffic rules have essentially the same hardware. The difference in performance comes down to how algorithmically sophisticated each car's <em>software</em> is.
<p/> At the other end of the spectrum, most "enterprise" projects are algorithmically trivial (I said "most", so if anyone's ego out there is tied to his knowledge of JSP or whatever, spare me the righteous indignation) and the most complex data structure used is often a hashtable.
<p/> I don't think this is always a valid heuristic, but it might be valuable to rank projects on a scale of the algorithmic complexity involved in each (which in turn probably depends on how close the project comes to exploring the unknown, as the DARPA race efforts do), then by how compelling they are, and see if there is a correlation.
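To make that proposal concrete, here is a toy sketch of the correlation check. The project names and 1-10 scores are entirely made up for illustration; only the rank-correlation arithmetic is real:

```python
def spearman(xs, ys):
    """Spearman rank correlation for two equal-length lists without ties."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# (algorithmic complexity, "interestingness"), both on an invented 1-10 scale
projects = {
    "autonomous car": (9, 9),
    "search engine": (8, 7),
    "enterprise CRUD app": (2, 3),
    "payroll report": (1, 2),
}
complexity = [c for c, _ in projects.values()]
interest = [i for _, i in projects.values()]
print(spearman(complexity, interest))  # 1.0: the two rankings agree exactly
```

With real data the interesting question is how far below 1.0 the coefficient falls, and for which projects.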
<p/>
In my experience, while enterprise projects have their own set of problems to be solved, they are rarely algorithmic in nature (as opposed to, say, economic or scheduling issues).
<p/>Another supporting data point might be whether interviews for the interesting dev jobs focus heavily on testing algorithmic knowledge. At first glance this seems to hold as well. If you are interviewing for Google Research, you probably need to be great at algorithms. If you are interviewing for Wipro, not so much - a knowledge of the contemporary buzzwords (agile, xp, tdd, ruby dsl blah) should get you through.