The point is still valid, though. In his latest article, I find he has articulated, in simple, crystal-clear sentences, ideas I have been wrestling with in wretched futility for months.

Here is a simple example. When I went back to Thoughtworks for a visit recently, someone asked me, "You have been doing some interesting (== non-J2EE, non-enterprise) work for a year now. So what did you learn?" And I said (among other things), "Math is important." But then the Thoughtworker asked, "Isn't Math just another domain for the 'analyst' to master? Any domain expert who deals with a domain will know exactly what he wants calculated and how. Why should we developers delve into the underlying theory? In other words, isn't Math just a domain?"

I instantly saw that there were deep chasms dividing the world of enterprise software from the kind of code I write these days, but I couldn't find the words to bridge them. I mumbled weakly that "understanding math somehow makes your thinking better". Neither the questioner nor I was satisfied.

Now see how elegantly Paul expresses it (emphasis mine). After saying that letting your mind wander is often a good source of ideas, he begins to wonder why that is so.

*"...What happens when your mind wanders? It may be like doodling. Most people have characteristic ways of doodling... Perhaps letting your mind wander is like doodling with ideas. You have certain mental gestures you've learned in your work, and when you're not paying attention, you keep making these same gestures, but somewhat randomly. In effect, you call the same functions on random arguments. That's what a metaphor is: a function applied to an argument of the wrong type... The habits of mind you invoke on some field don't have to be derived from working in that field. In fact, it's often better if they're not...
Are some kinds of work better sources of habits of mind than others? I suspect harder fields may be better sources, because to attack hard problems you need powerful solvents. I find math is a good source of metaphors, good enough that it's worth studying just for that."*

And that is what I should have said when people asked what good Math was. Math gives you more "primitives" to operate with and more ways of combining them. So do programming languages like Erlang and Lisp (which I was forced to turn to when the problems I was working on got too hard to handle with Java or Ruby). Math (and some programming languages) are indeed "powerful solvents" that give you more ways of perceiving a problem, more choices in how to deal with it, and overall a much richer field of possibilities.

Duh! I had to wait for Paul Graham to write that up, and when I read the above-quoted sentences I was shouting at myself, "Exactly! *That* is what I wanted to say."

Here is an ultra-simple example of what I experienced. I needed to build a fairly complicated Neural Network with a very abstruse Training Scheme. For a long time I used the habits of thought I had picked up in years of "Object Oriented Thinking" and thought of a neural network as an "object having properties X and Y, with behaviours A and B, consisting of n Layer objects, each having properties blah and behaviour foo... A Training Method is a Strategy Object that decides..." and so on.

The people I was talking to were primarily mathematicians and scientists, and soon a communications gap yawned and we were all getting frustrated. Now I could have insisted they grok Objects, or I could just learn the underlying math. I chose the latter: I gritted my teeth, clenched my muscles, jumped into the Linear Algebra and Calculus needed to understand neural networks and... found it quite interesting. Nothing like the nonsense poured down our throats in college. I was now using the best books, written by the most talented folks in the field, to learn, and I was applying the knowledge I gained to solve a very tough problem. And soon I saw that understanding Vector Spaces and Differential Equations allowed me to *see*
neural networks in a new way. As I told a friend sometime later, a Neural Network __is__ *a set of Equations, and every training scheme solves a problem in Topology. There is no way I could have come to that understanding without harnessing the underlying math. No amount of "Domain Modelling" or "Extreme Programming" or whatever would have given me the understanding needed to tackle the problem.*

And that is why today, when some people look at the work I do and say, "Dude, I don't want to study all this math stuff. Too dry for me. Lisp is too hard as well, leave alone all this concurrency stuff. But I want to learn AI too. I will just use TDD and Java," I just smile to myself. Unlike most "enterprise" work, in production-strength AI (and most other "tough" fields, I would imagine), not understanding the underlying domain deeply enough to acquire an intuition for what will work and what will not just wastes time. Without the new perceptions opened up by grokking the "powerful solvents" (and it will take about a year of work to get to that point), one is just as blind as ever.
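To make the "a neural network is a set of equations" view concrete, here is a toy sketch in Python (not the network described above, and all weight values are made up for illustration): every layer is nothing but y = sigmoid(Wx), so a whole "network" is just a list of matrices and a fold over them. No Network class, no Layer class.

```python
import math

# A "network" is nothing but data: one weight matrix per layer,
# represented here as plain lists of lists. Values are illustrative.

def matvec(W, x):
    """Multiply matrix W by vector x."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def sigmoid(v):
    """Apply the logistic function elementwise."""
    return [1.0 / (1.0 + math.exp(-z)) for z in v]

def forward(weights, x):
    """One forward pass: each layer is y = sigmoid(W x)."""
    for W in weights:
        x = sigmoid(matvec(W, x))
    return x

weights = [
    [[0.5, -0.2], [0.3, 0.8]],   # layer 1: 2 inputs -> 2 units
    [[1.0, -1.0]],               # layer 2: 2 units  -> 1 output
]
print(forward(weights, [1.0, 0.5]))  # a single value in (0, 1)
```

The point of the sketch is the shape of the model, not the numbers: once the network is "a set of equations", the natural primitives are matrices and function composition rather than objects with behaviours.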

And it needed Paul Graham to make it all clear in about 5 sentences.

I have also come to see that the Analyst/Developer split is very dangerous, even in 'Enterprise' software and leads to substandard software. But that is a topic for another post.

## 2 comments:

Great article by Graham. Thanks for the link.

Contrary to what I thought at first, I will have to stick with your developer friend at Thoughtworks: Math is still a domain that needs to be mastered. It might give you metaphors to think in, but those metaphors won't help you write better enterprise code; they might help you write better neural networks, maybe, or even better weather forecasters. But for domains that don't need math, you just don't need math. I mean advanced math like Calculus, Topology, Probabilistic Analysis, etc. For someone who is building a UI, Calculus is really not needed. Now, whether knowing Calculus makes one a better thinker overall - that's a different question.

The main point I am trying to make here is that Computer Science is different from Mathematics. I will elaborate on that at an abstract level after commenting on something else you have written which I don't agree with:

*a Neural Network is a set of Equations and every training scheme solves a problem in Topology. There is no way I could have come to that understanding without harnessing the underlying math.*

So, after figuring out the underlying equations and the training routine, you will have to code it up, right? I mean, there are nodes and layers in the network, and each of them indeed has certain behaviors. Whether you make the node or layer an object, or a training sequence an object, or anything else an object, is your classic OO design problem. Maybe OO doesn't fit here. Maybe you just need simple data structures with C-like functions. The fact that you have to code it up eventually to make a working program should tell you that you have to design it in one of the programming paradigms, and the choice you make should tell you how good you are at your domain (the math underlying neural networks) and at software design (knowing the pros and cons of various programming paradigms).

Computer Science borrows heavily from Math, but is not the same. I am not qualified enough to elaborate the differences, but a simple attempt would be to think of NP-completeness, Approximation Algorithms, Algorithms themselves, data structures, layered architectures, parallelization, etc. These concepts are very much computer science. Their analysis requires Math.

I realize that I am totally lost now. Writing is no easy business anyway. But here is my last ditch attempt at an overall point.

You had written earlier that you wanted to write "cooler" software, like the ones that drive robots, predict weather, or something on those lines. I think anything that is "fundamental" in nature, and involves some degree of sophistication, will need math.

Sorry about this. Maybe I will think it through and write it on my own blog. Great food for thought though.

Teju,

I (think I) agree with most of what you say.

1. "Math is just a domain": I don't know what you understood from this. But my view (now) is that Math is a kind of meta-domain. It most certainly is not "another domain" in the TW sense (e.g. "Leasing is a domain which a developer does not need to know").

2. I never said one needs math for "enterprise" code. Where did that come from?

3. "Computer Science is different from Mathematics": Well, yeah. I never said otherwise. Depending on how you define "Comp Sci", some branches of Math (think Discrete Math) are very closely tied up with it; Calculus probably isn't. Since I never said one was the other, I don't see where this came from.

4. "So, after figuring out the underlying equations and the training routine, you will have to code it up right? I mean, there are nodes and layers in the network, and each of them indeed have certain behaviors."

Not really. And this is what I was trying to say. I ended up "modelling" the network as a series of matrices and matrix operations (distributed across various machines, etc.).
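A minimal sketch of what this "matrices and matrix operations, no objects" modelling might look like (illustrative only, not the actual system: a single linear layer, one training sample, made-up numbers): even a training step is just matrix arithmetic, here a plain gradient-descent update on the weight matrix.

```python
# Hypothetical sketch: one gradient-descent step on a weight matrix W,
# minimizing the squared error 0.5 * ||W x - target||^2.
# The "model" is the matrix itself; training is repeated matrix updates.

def train_step(W, x, target, lr=0.1):
    """Return W updated by one gradient-descent step."""
    y = [sum(w * xi for w, xi in zip(row, x)) for row in W]   # y = W x
    err = [yi - ti for yi, ti in zip(y, target)]              # error vector
    # dE/dW[i][j] = err[i] * x[j], so W[i][j] -= lr * err[i] * x[j]
    return [[w - lr * e * xj for w, xj in zip(row, x)]
            for row, e in zip(W, err)]

W = [[0.0, 0.0]]                      # 1x2 weight matrix, starts at zero
for _ in range(50):
    W = train_step(W, [1.0, 2.0], [1.0])
# After repeated steps, W x converges toward the target.
```

There is no Network or Layer object anywhere; the concepts being modelled are the mathematical ones (matrices, gradients), which is exactly the point being argued.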

The "object way" of having a class Network, class Layer, etc. is not the only way. And that was exactly the point: if you aren't aware of the math underlying what you are doing (assuming you are working in one of these "tough" fields), you might end up modelling the "wrong" concepts. The objection is not to using objects vs. (say) structures and functions.

It is about the "enterprise" habit of modelling what you see on the surface *and assuming that is enough*.

Unless you know what the heck is happening (here, a topology function and Linear Equation solution) you end up modelling the "surface" of the problem.

In other words, what is "happening underneath" in much of AI work is mathematical. And you need to be able to perceive that. You can't without learning/grokking the math. End point. :-)

Again I am not very sure we really disagree.

