I had the privilege of seeing a lecture by Dr. Robert Laughlin (1998 Nobel Prize winner in Physics) at the Adler Planetarium. “[Dr. Laughlin argued] that the true frontier of physics may not lie in studying tiny individual particles, but rather by studying the properties that emerge when large collections of particles are taken together as a whole.” This notion of “emergence” flies in the face of centuries of reductionism.
Dr. Laughlin demonstrated that some (all?) sufficiently complex systems show emergent or collective behaviors — behaviors that are greater than the sum of their parts. What’s fascinating about these behaviors is that they do not follow reduction-based models. Take, for example, a rigid aluminum bar. What causes its rigidity? If you reduce the problem down to the nano scale, say, a single layer of aluminum atoms, you find that the system is no longer rigid; it actually demonstrates fluid behaviors. As you increase the number of atoms, the system displays more and more rigid behavior. There is no single point at which, like a light switch, rigidity turns on or off.
Another interesting example is classical (Newtonian) mechanics. As you attempt to reduce the problem of mechanics down further and further, you enter the realm of quantum mechanics, where probabilities rule. There is no point at which the transition from one model to the other takes place. Classical mechanics emerges from quantum mechanics when a large set of particles is observed. If there is a 100M ton freight train coming straight at you, you don’t use the probabilities of quantum mechanics to compute whether or not the train is going to hit you — you’re going to get hit!
While listening to this lecture, my mind began to spin and the wheels began to turn. Coming from a physics background, I am naturally a reductionist — I tend to believe that every problem can be reduced to first principles. This belief has bled over into my software engineering work. I began to wonder: what if the significantly complex interactions of software in modern systems begin to display emergent behavior? How could we possibly begin to systematically analyze and understand the nature of this behavior?
It seems that, in general, modern software engineering takes a reductionist approach. Most agile methodologies, for example, advocate unit testing. Unit tests are nothing more than a firm belief in, and practice of, reductionism. Even without emergent behaviors, most software engineers know that the transition from unit testing to integration and system testing is non-trivial. Simply understanding the interactions of various software components, and the failure modes those interactions introduce, is a hard and poorly understood problem.
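As a toy illustration of that gap (my own sketch, not anything from the lecture), here are two hypothetical components whose unit tests both pass in isolation, yet whose composition fails — a failure mode that only appears at the integration level:

```python
def serialize(record):
    """Component A: render a record's fields as strings for transport."""
    return {key: str(value) for key, value in record.items()}

def total(record):
    """Component B: sum the numeric fields of a record."""
    return sum(record.values())

# Unit tests: each component is correct against its own contract.
assert serialize({"a": 1}) == {"a": "1"}
assert total({"a": 1, "b": 2}) == 3

# Integration: composing the two raises TypeError, because B assumes
# numeric values while A produces strings -- a mismatch that neither
# unit test alone can reveal.
try:
    total(serialize({"a": 1, "b": 2}))
    composed_ok = True
except TypeError:
    composed_ok = False

print(composed_ok)
```

Each piece is "correct" when reduced to its own spec; the bug lives entirely in the interaction. Multiply that by thousands of components and the reductionist picture starts to strain.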
So what if non-reducible behaviors do occur? Will there be cases where we throw our propositional logic and lambda calculus out the window in favor of representations that model the emergent behavior? Interesting times are ahead!