Ravi Mohan's Blog
Friday, December 28, 2007
From this site: "...managers are voracious consumers of theory. In other words, they value ways to think about their world, and mental tools that will let them make decisions and predictions with a level of confidence higher than they get from experience and intuition alone." Ummmm. Ok. :-P
Sunday, December 16, 2007
I've just taken up an assignment to mentor one of my friends in maths/proof/logic. My "student" has no background in maths or logic (if you ignore the negative knowledge gained by sitting through a bachelor's degree in Engineering in India's f***ed up educational system), but is a good programmer. The expected post condition is that my "student" will be able to read and work out mathematical proofs, design an experiment, analyze an existing research paper, work out an algorithm for a given task, understand basic calculus and linear algebra, and in general gain the ability to conduct independent research in Machine Vision.

I (and some others, like my friend Rajesh, e.g.) have worked out some of these "how to"s over the last few years, mostly by brute force search of the landscape, discovering pitfalls by falling into them (and making a careful map, which it is now time to pass on). "How to work out a proof", for example, is rarely taught (well) in a classroom setting, and I have had to synthesize knowledge from many sources to come up with a workable technique. I can save my "student" hundreds of hours of work.

Teaching is how one crystallizes one's knowledge. I've always enjoyed teaching, but what I've done earlier was in a one to many format, speaking to a group of people while using a blackboard to derive a proof or a projector to display code. The one to one format will be a new experience. I spent a few hours today working out a "syllabus" and it looks very good, even if I do say so myself :-)

Update: I plan to teach first order logic and proofs first, probably using books by Velleman, Polya, and Doets and van Eijck.
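As an illustration (my own example, not something from the syllabus above), the "givens and goals" style of proof that Velleman teaches looks roughly like this, for the simple claim that set inclusion is transitive:

```latex
\textbf{Theorem.} If $A \subseteq B$ and $B \subseteq C$, then $A \subseteq C$.

\textbf{Givens:} $\forall x\,(x \in A \rightarrow x \in B)$ and
$\forall x\,(x \in B \rightarrow x \in C)$.
\textbf{Goal:} $\forall x\,(x \in A \rightarrow x \in C)$.

\textbf{Proof.} Let $x$ be arbitrary, and suppose $x \in A$.
Since $A \subseteq B$, it follows that $x \in B$.
Since $B \subseteq C$, it follows that $x \in C$.
As $x$ was arbitrary, we conclude $\forall x\,(x \in A \rightarrow x \in C)$,
i.e.\ $A \subseteq C$. $\blacksquare$
```

The point of the method is mechanical: the shape of the goal (here, a universally quantified implication) dictates the opening moves of the proof (let an arbitrary element, assume the antecedent), so the student always knows what to write next.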
Thursday, December 13, 2007
One of my friends recently told me (I have interesting friends :-)) "Every time I meet you, you have moved further ahead in [area of focus], so how do I catch up with you?" The answer == "You can't, unless (a) I stand still and wait for you to catch up - why should I? - or (b) you figure out how to move a few hundred percent faster than me (good luck with that)."

IMO, the right thing to do with life is not to waste cycles "catching up" with other people. I'd rather do something that is uniquely mine. Competing with oneself is hard enough without wasting bandwidth on what other people are doing or how they are "drawing ahead" or whatever.
I was speaking to one of my friends doing research in robotics at a major American university when it struck me that most of the researchers in robotics and machine learning are gentle pacifists who wouldn't want to hurt a fly, while one of the greatest uses of robots is going to be on real battlefields. And I am not talking about robots engaging in non-lethal activities like scouting or mine clearing, though that's how robots will initially appear on the battlefield. I am talking of robots (on land, sea, and air) armed with guns and other weaponry, increasingly deciding for themselves when to fire and what to fire at.

I believe that wars of the future will have two major features: asymmetrical war (what we call "insurgency" - blurring the boundaries between "civilians" and "soldiers") and increased participation by increasingly sophisticated autonomous robots/remote control weapons. (See this column for some thinking along these lines.) I'll ignore the first aspect and focus on the second, in the interests of keeping the size of this post manageable.

Here is how robotics will develop in war. First, robots will engage in non-lethal activities like mine clearing or IED detection. (This is happening today.) Then you'll see them accompany human combat units as augmenters and enablers on real battlefields. (This is beginning to happen.) As robots get more and more sophisticated, they will take up potentially lethal but non-combat operations like patrolling camp perimeters or no-fly areas, opening fire only when "provoked". (This is beginning to happen too.) The final stage will come when robotic weapons are an integral part of the battlefield, just as "normal", human-controlled machines are today, and make autonomous or near-autonomous combat decisions. These robots will look nothing like the vaguely humanoid robots of Star Wars - they will have shapes suited to their roles - anything from tiny dragonfly- or cockroach-like systems to computerized tanks and helicopters.
And this type of robotic battle system will not be confined to the major superpowers. As of today, what gives robots an edge is not really hardware but sophisticated software, something that does not depend on superpower status. Someone in Korea or India or China, or even a semi-military corporation, can gain a temporary edge in robotics software comparatively easily (though the Red Queen Effect will kick in soon enough to close the gaps) - an edge sufficient to tip a battle or a war one way or another. The funny thing is that quite a bit of the required technology is available, sometimes in embryonic form, today. "The future is already here; it's just not evenly distributed" - as William Gibson said. I know that the Indian defense forces have made substantial progress in battlefield robotics systems (I can't be more specific - the details are classified, so don't ask! :-D ), and what India is on the verge of doing, others are, too.

None of this is to claim that future combat will occur between machines, all clinical and detached, with no blood being spilled. On the contrary. Historically, every innovation in military technology has been hailed as bringing in an era of bloodless warfare. When the airplane was invented, for example, many columns were written on how it made war obsolete because, obviously, with planes roaming the sky no army could make unobserved movements, and so war was now futile! Dresden and the Blitz and Hiroshima were just around the corner.

My prediction is that war will become more lethal, with increasingly sophisticated autonomous weapons systems and ever more vicious asymmetrical warfare twisting the business of making "the other fellow die for his country" (or ideology or religion or ...) into newer and ever more horrific forms. And that is something that all of us working on robotics or machine learning should think about.

Update: First Armed Robots deployed in Iraq.
While these are not really autonomous systems (they are remote controlled), and substantial technical challenges remain on the path to truly autonomous systems, people conducting research in robotics know how fast these barriers are falling. Those who are not researchers in robotics/AI may mull over the progress in autonomous navigation of vehicles from 2004 to 2007, as exemplified by the DARPA Grand Challenge. Now extend that rate of progress to areas other than navigation.