Category Archives: Philosophy

Philosophy is never too far from physics. It is in their overlap that I expect breakthroughs.

Brains and Computers

We have a perfect parallel between brains and computers. We can easily think of the brain as the hardware and mind or consciousness as the software or the operating system. We would be wrong, according to many philosophers, but I still think of it that way. Let me outline the compelling similarities (according to me) before getting into the philosophical difficulties involved.

A lot of what we know of the workings of the brain comes from lesion studies. We know, for instance, that features like color vision, face and object recognition, motion detection, and language production and understanding are all controlled by specialized areas of the brain. We know this by studying people who have suffered localized brain damage. These functional features of the brain are remarkably similar to computer hardware units specialized in graphics, sound, video capture and so on.

The similarity is even more striking when we consider that the brain can compensate for the damage to a specialized area by what looks like software simulation. For instance, the patient who lost the ability to detect motion (a condition normal people would have a hard time appreciating or identifying with) could still infer that an object was in motion by comparing successive snapshots of it in her mind. The patient with no ability to tell faces apart could, at times, deduce that the person walking toward him at a pre-arranged spot at the right time was probably his wife. Such instances give us the following attractive picture of the brain.
Brain → Computer hardware
Consciousness → Operating System
Mental functions → Programs
It looks like a logical and compelling picture to me.

This seductive picture, however, is far too simplistic at best, and utterly wrong at worst. The basic philosophical problem with it is that the brain itself is a representation drawn on the canvas of consciousness and the mind (which are again cognitive constructs). This abysmal infinite regression is impossible to crawl out of. But even if we ignore this philosophical hurdle and ask ourselves whether brains could be computers, we have big problems. What exactly are we asking? Could our brains be computer hardware and our minds software running on them? Before asking such questions, we have to ask parallel questions: Could computers have consciousness and intelligence? Could they have minds? If they had minds, how would we know?

Even more fundamentally, how do you know whether other people have minds? This is the so-called Problem of Other Minds, which we will discuss in the next post before proceeding to consider computing and consciousness.

Pride and Pretension

What has been of intense personal satisfaction for me was my “discovery” related to GRBs and radio sources alluded to earlier. Strangely, it is also the origin of most of the things that I’m not proud of. You see, when you feel that you have found the purpose of your life, it is great. When you feel that you have achieved that purpose, it is greater still. But then comes the question — now what? Life in some sense ends with the perceived attainment of the professed goals. A life without goals is clearly a life without much motivation. It is a journey past its destination. As many before me have discovered, it is the journey toward an unknown destination that drives us. The journey’s end, the arrival, is troublesome, because it is death. With the honest conviction of this attainment of the goals then comes the disturbing feeling that life is over. Now there are only rituals left to perform. As a deep-seated, ingrained notion, this conviction of mine has led to personality traits that I regret. It has led to a level of detachment in everyday situations where detachment was perhaps not warranted, and a certain recklessness in choices where a more mature consideration was perhaps indicated.

The recklessness led to many strange career choices. In fact, I feel as though I have lived many different lives in my time. In most roles I attempted, I managed to move near the top of the field. As an undergrad, I got into the most prestigious university in India. As a scientist later on, I worked with the best at that Mecca of physics, CERN. As a writer, I had the rare privilege of invited book commissions and regular column requests. Despite my ethical misgivings about it, I am quite happy with my short foray into quantitative finance and my sojourn in banking. Even as a blogger and a hobby programmer, I had quite a bit of success. Now, as the hour to bow out draws near, I feel as though I have been an actor who had the good fortune of landing several successful roles. As though the successes belonged to the characters, and my own contribution was a modicum of acting talent. I guess that detachment comes of trying too many things. Or is it just the grumbling restlessness in my soul?

Pursuit of Knowledge

What I would like to believe my goal in life to be is the pursuit of knowledge, which is, no doubt, a noble goal to have. It may be only my vanity, but I honestly believe that it was really my goal and purpose. But by itself, the pursuit of knowledge is a useless goal. One could render it useful, for instance, by applying it — to make money, in the final analysis. Or by spreading it, teaching it, which is also a noble calling. But to what end? So that others may apply it, spread it and teach it? In that simple infinite regression lies the futility of all noble pursuits in life.

Futile as it may be, what is infinitely more noble, in my opinion, is to add to the body of our collective knowledge. On that count, I am satisfied with my life’s work. I figured out how certain astrophysical phenomena (like gamma ray bursts and radio jets) work. And I honestly believe that it is new knowledge; there was an instant a few years ago when I felt that if I died then, I would die a happy man, for I had achieved my purpose. Liberating as this feeling was, now I wonder — is it enough to add a small bit of knowledge to the stuff we know, with a little post-it note saying, “Take it or leave it”? Should I also ensure that whatever I think I found gets accepted and officially “added”? This is indeed a hard question. To want to be officially accepted is also a call for validation and glory. We don’t want any of that, do we? Then again, if the knowledge just dies with me, what is the point? Hard question indeed.

Speaking of goals in life reminds me of this story of a wise man and his brooding friend. The wise man asks, “Why are you so glum? What is it that you want?”
The friend says, “I wish I had a million bucks. That’s what I want.”
“Okay, why do you want a million bucks?”
“Well, then I could buy a nice house.”
“So it is a nice house that you want, not a million bucks. Why do you want that?”
“Then I could invite my friends, and have a nice time with them and family.”
“So you want to have a nice time with your friends and family. Not really a nice house. Why is that?”

Such why questions will soon yield happiness as the final answer, and the ultimate goal, a point at which no wise man can ask, “Why do you want to be happy?”

I do ask that question, at times, but I have to say that the pursuit of happiness (or happyness) does sound like a good candidate for the ultimate goal in life.

Summing Up

Toward the end of his life, Somerset Maugham summed up his “take-aways” in a book aptly titled “The Summing Up.” I also feel an urge to sum up, to take stock of what I have achieved and attempted to achieve. This urge is, of course, a bit silly in my case. For one thing, I clearly achieved nothing compared to Maugham, even considering that he was a lot older when he summed up his stuff and had more time to achieve things. Secondly, Maugham could express his take on life, the universe and everything much better than I will ever be able to. These drawbacks notwithstanding, I will take a stab at it myself, because I have begun to feel the nearness of an arrival — kind of like what you feel in the last hours of a long-haul flight. I feel as though whatever I set out to do, whether I have achieved it or not, is already behind me. Now is probably as good a time as any to ask myself — what is it that I set out to do?

I think my main goal in life was to know things. In the beginning, it was physical things like radios and television. I still remember the thrill of finding the first six volumes of “Basic Radio” in my dad’s book collection, although I had no chance of understanding what they said at that point in time. It was a thrill that took me through my undergrad years. Later on, my focus moved on to more fundamental things like matter, atoms, light, particles, physics etc. Then on to mind and brain, space and time, perception and reality, life and death — issues that are most profound and most important, but paradoxically, least significant. At this point in my life, where I’m taking stock of what I have done, I have to ask myself, was it worth it? Did I do well, or did I do poorly?

Looking back at my life so far, I have many things to be happy about, and many others that I’m not so proud of. Good news first — I have come a long way from where I started off. I grew up in a middle-class family in the seventies in India. The Indian middle class in the seventies would be poor by any sensible world standard. And poverty was all around me, with classmates dropping out of school to engage in menial child labor like carrying mud, and cousins who could not afford one square meal a day. Poverty was not a hypothetical condition afflicting unknown souls in distant lands; it was a painful and palpable reality all around me, a reality I escaped by blind luck. From there, I managed to claw my way to an upper-middle-class existence in Singapore, which is rich by most global standards. This journey, most of which can be attributed to blind luck in terms of genetic accidents (such as academic intelligence) or other lucky breaks, is an interesting one in its own right. I think I should be able to put a humorous spin on it and blog it up some day. Although it is silly to take credit for accidental glories of this kind, I would be less than honest if I said I wasn’t proud of it.

How Should I Die?

I have reached the age where I have seen a few deaths, and I have had time to think about it a bit. I feel the most important thing is to die with dignity. The advances in modern medicine, though effective in keeping us alive longer, may rob us of the dignity with which we would like to go. The focus is on keeping the patient alive. But the fact of the matter is that everybody will die. So medicine will lose the battle, and it is a sore loser. That’s why statements like “Cancer is the biggest killer” are, to some extent, meaningless. When we figure out how to prevent deaths from common colds and other infections, heart disease begins to claim a relatively larger share of deaths. When we beat heart disease, cancer becomes the biggest killer, not so much because it is now more prevalent or virulent, but because, in the zero-sum game of life and death, something had to.

The focus on the quantity of life diminishes its quality near its tail end due to a host of social and ethical considerations. Doctors are bound by their professional covenants to offer us the best care we ask for (provided, of course, that we can afford it). The “best care” usually means the one that will keep us alive the longest. The tricky part is that it has become an entrenched part of the system, and the default choice that will be made for us — at times even despite our express wishes to the contrary.

Consider the situation when an aged and fond relative of ours falls terminally sick. The relative is no longer in control of the medical choices; we make the choices for them. Our well-meaning intentions make us choose exactly the “best care” regardless of whether the patient has made different end-of-life choices.

The situation is further complicated by other factors. The terminal nature of the sickness may not be apparent at the outset. How are we supposed to decide whether the end-of-life choices apply when even the doctors may not know? Besides, in those dark hours, we are understandably upset and stressed, and our decisions are not always rational and well-considered. Lastly, the validity of the end-of-life choices may be called into question. How sure are we that our dying relative hasn’t changed their mind? It is impossible for any of us to put ourselves in their shoes. Consider my case. I may have made it abundantly clear now that I do not want any aggressive prolongation of my life, but when I make that decision, I am healthy. Toward the end, lying comatose in a hospital bed, I may be screaming in my mind, “Please, please, don’t pull the plug!” How do we really know that we should be bound by the decisions we took under drastically different circumstances?

I have no easy answers here. However, we do have some answers from the experts — the doctors. How do they choose to die? Maybe we can learn something from them. I, for one, would like to go the way the doctors choose to go.

Why Do We Drink?

We get in trouble or at least embarrass ourselves once in a while because of our drinking. Why do we still do it? Ok, it is fun to have a drink or two at a party — it gives you a buzz, loosens your tongue, breaks the ice etc. But most of us go way beyond that. We almost always end up regretting it the next morning. But we still do it.

Alcohol actually tastes bad, and we have to add all kinds of sodas and fruit juices to mask it. It is a depressant, so if we drink it when we are sad, it makes us sadder. It is toxic to our liver, kills our brain cells and makes us do silly things like puke and generally make an ass of ourselves. But, by and large, most people who can get their hands on it, drink it.

I am not talking about alcoholics who have trouble controlling their urges (although I believe most of us are budding alcoholics). I am not even talking about why we start drinking — that could be because of peer pressure, teenage dares, curiosity etc. I’m talking about those of us who continue drinking long after that sweet buzz that alcohol used to give us is history.

I do have a theory about why we drink. But I have to warn you — my theory is a bit loony, even by the generous standards of this Unreal Blog. I think we drink because it alters our sense of reality. You see, although we don’t usually articulate it or even consciously know it, we feel that there is something wrong with the physical reality we find ourselves in. It is like a tenuous veil surrounding us that disappears the moment we look at it, but undulates beyond the periphery of our vision, giving us fleeting glimpses of its existence in our unguarded moments. Perhaps, if we let our guard down, we can catch it. This vain and unconscious hope is probably behind our doomed attraction toward alcohol and other hallucinogens.

Although the veil of reality is tenuous, its grip on us is anything but. Its laws dictate our every movement and action, and literally pull us down and keep us grounded. I think our minds, unwilling to be subjugated to any physical laws, rebel against them. Could this be behind our teenagers’ infatuation with Stephenie Meyer’s vampire stories and Harry Potter’s magic? Isn’t this why we love the superheroes of our childhood days? Don’t we actually feel a bit liberated when Neo (The One in The Matrix) shows that physical rules don’t apply to him? Why do you think we worship miracles and the supernatural?

Well, maybe I am just trying to find philosophical reasons to get sozzled. Honestly, I’m feeling a bit thirsty.

Seeing and Believing

When we open our eyes and look at something, we see that damn thing. What could be more obvious than that, right? Let’s say you are looking at your dog. What you see is really your dog, because, if you want, you can reach out and touch it. It barks, and you can hear the woof. If it stinks a bit, you can smell it. All these extra perceptual clues corroborate your belief that what you are seeing is your dog. Directly. No questions asked.

Of course, my job on this blog is to ask questions and cast doubts. First of all, seeing and touching seem to be a bit different from hearing and smelling. You don’t strictly hear your dog bark; you hear its sound. Similarly, you don’t smell it directly; you smell its odor, the chemical trail the dog has left in the air. Hearing and smelling are three-place perceptions — the dog generates the sound/odor, the sound/odor travels to you, and you perceive the sound/odor.

But seeing (or touching) is a two-place thing — the dog there, and you here perceiving it directly. Why is that? Why do we feel that when we see or touch something, we sense it directly? This belief in the perceptual veracity of what we see is called naive realism. We of course know that seeing involves light (so does touching, but in a much more complicated way), that what we are seeing is the light reflected off an object, and so on. It is, in fact, no different from hearing something. But this knowledge of the mechanism of seeing doesn’t alter our natural, commonsense view that what we see is what is out there. Seeing is believing.

Extrapolated from the naive version is scientific realism, which asserts that our scientific concepts are also real, even though we may not directly perceive them. So atoms are real. Electrons are real. Quarks are real. Many of our better scientists have been skeptical about this extrapolation of our notion of what is real. Einstein, probably the best of them, suspected that even space and time might not be real. Feynman and Gell-Mann, after developing theories of electrons and quarks, expressed the view that electrons and quarks might be mathematical constructs rather than real entities.

What I am inviting you to do here is to go beyond the skepticism of Feynman and Gell-Mann, and delve into Einstein’s words — space and time are modes by which we think, not conditions in which we live. The sense of space is so real to us that we think of everything else as interactions taking place in the arena of space (and time). But space itself is the experience corresponding to the electrical signals generated by the light hitting your retina. It is a perceptual construct, much like the tonality of the sound you hear when air pressure waves hit your eardrums. Our adoption of naive realism results in our complete trust in the three-dimensional space view. And since the world is created (in our brain, as perceptual constructs) based on light, its speed becomes an all-important constant in our world. And since speed mixes space and time, a better description is found in a four-dimensional Minkowski geometry. But all these descriptions are based on perceptual experiences and are therefore unreal in some sense.
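The space-time mixing mentioned above has a conventional expression. As a sketch (this is textbook special relativity, not anything specific to the argument here), Minkowski geometry combines space and time into a single invariant interval, with the speed of light $c$ acting as the conversion factor between the two:

```latex
ds^2 = c^2\,dt^2 - \left( dx^2 + dy^2 + dz^2 \right)
```

Observers in relative motion disagree about the time piece $dt$ and the space pieces $dx, dy, dz$ taken separately, but they all agree on $ds^2$ — which is the precise sense in which speed "mixes" space and time.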

I know the description above is highly circular — I talked about space being a mental construct created by light traveling through, get this, space. And when I speak of its speed, naturally, I’m talking about distance in space divided by time, and positing it as the basis for the space-time mixing. This circularity makes my description less than clear and convincing. But the difficulty goes deeper than that. You see, all we have is this cognitive construct of space and time. We can describe objects and events only in terms of these constructs, even when we know that they are only cognitive representations of sensory signals. Our language doesn’t go beyond that. Well, it does, but then we will be talking in the language, for instance, of Advaita, calling the constructs Maya and the causes behind them Brahman, which stays unknowable. Or we will be using some other parallel descriptions. These descriptions may be profound, wise and accurate. But ultimately, they are also useless.

But if philosophy is your thing, the discussions of cognitive constructs and unknown causations are not at all useless. Philosophy of physics happens to be my thing, and so I ask myself — what if I assume the unknown physical causes exist in a world similar to our perceptual construct? I could then propagate the causes through the process of perception and figure out what the construct should look like. I know, it sounds a bit complex, but it is something that we do all the time. We know, for instance, that the stars we see in the night sky are not really there — we are seeing them the way they were a few (or a few million or billion) years ago, because the light from them takes a long time to reach us. Physicists know that the perceived motion of celestial objects needs to be corrected for these light-travel-time effects as well.
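The light-travel-time correction mentioned above is, at its simplest, just distance divided by the speed of light. A minimal sketch (the helper name is my own; the solar distance is the standard rounded figure):

```python
# How far in the past are we "seeing" an object at a given distance?
SPEED_OF_LIGHT_KM_S = 299_792.458  # speed of light in km/s

def light_travel_time_s(distance_km: float) -> float:
    """Seconds taken by light to cover distance_km."""
    return distance_km / SPEED_OF_LIGHT_KM_S

# The Sun is about 149.6 million km away, so we always see it
# as it was roughly 8.3 minutes ago.
sun_delay_s = light_travel_time_s(149.6e6)
```

The same division, applied to a star thousands of light years away, yields the thousands-of-years lag the paragraph above describes.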

In fact, Einstein used the light-travel-time effects as the basis for deriving his special theory of relativity. He then stipulated that space and time behave the way we perceive them, derived using the said light-travel-time effects. This, of course, is based on his deep understanding that space and time are “the modes by which we think,” but also on the assumption that the causes behind the modes are similar to the modes themselves. This depth of thinking is lost on the lesser scientists who came after him. The distinction between the modes of thinking and their causation is also lost, so that space and time have become entities that obey strange rules. Like bent spoons.


Deferred Satisfaction

The mother was getting annoyed that her teenaged son was wasting time watching TV.
“Son, don’t waste your time watching TV. You should be studying,” she advised.
“Why?” quipped the son, as teenagers usually do.
“Well, if you study hard, you will get good grades.”
“Yeah, so?”
“Then, you can get into a good school.”
“Why should I?”
“That way, you can hope to get a good job.”
“Why? What do I want with a good job?”
“Well, you can make a lot of money that way.”
“Why do I want money?”
“If you have enough money, you can sit back and relax. Watch TV whenever you want to.”
“Well, I’m doing it right now!”

What the mother is advocating, of course, is the wise principle of deferred satisfaction. It doesn’t matter if you have to do something slightly unpleasant now, as long as you get rewarded for it later in life. This principle is so much a part of our moral fabric that we take it for granted, never questioning its wisdom. Because of our trust in it, we obediently take bitter medicines when we fall sick, knowing that we will feel better later on. We silently submit ourselves to jabs, root canals, colonoscopies and other atrocities done to our persons because we have learned to tolerate unpleasantness in anticipation of future rewards. We even work like a dog at jobs so loathsome that they really have to pay us a pretty penny to stick it out.

Before I discredit myself, let me make it very clear that I do believe in the wisdom of deferred satisfaction. I just want to take a closer look because my belief, or the belief of seven billion people for that matter, is still no proof of the logical rightness of any principle.

The way we lead our lives these days is based on what they call hedonism. I know that the word has a negative connotation, but that is not the sense in which I am using it here. Hedonism is the principle that any decision we take in life is based on how much pain and pleasure it is going to create. If there is an excess of pleasure over pain, then it is the right decision. (When the recipients of the pain and the pleasure are distinct individuals, nobility or selfishness enters the decision, but we are not considering that case here.) So the aim of a good life is to maximize this excess of pleasure over pain. Viewed in this context, the principle of delayed satisfaction makes sense — it is one good strategy to maximize the excess.

But we have to be careful about how much to delay the satisfaction. Clearly, if we wait for too long, all the satisfaction credit we accumulate will go wasted because we may die before we have a chance to draw upon it. This realization may be behind the mantra “live in the present moment.”

Where hedonism falls short is in the fact that it fails to consider the quality of the pleasure. That is where it gets its bad connotation from. For instance, a Ponzi-scheme master like Madoff probably made the right decisions, because he enjoyed long periods of luxurious opulence at the cost of a relatively short duration of pain in prison.

What is needed, perhaps, is another measure of the rightness of our choices. I think it is in the intrinsic quality of the choice itself. We do something because we know that it is good.

I am, of course, touching upon the vast branch of philosophy they call ethics. It is not possible to summarize it in a couple of blog posts. Nor am I qualified enough to do so. Michael Sandel, on the other hand, is eminently qualified, and you should check out his online course Justice: What is the Right Thing to Do? if interested. I just want to share my thought that there is something like the intrinsic quality of a way of life, or of choices and decisions. We all know it because it comes before our intellectual analysis. We do the right thing not so much because it gives us an excess of pleasure over pain, but because we know what the right thing is and have an innate need to do it.

That, at least, is the theory. But, of late, I’m beginning to wonder whether the whole right-wrong, good-evil distinction is an elaborate ruse to keep some simple-minded folks in check, while the smarter ones keep enjoying totally hedonistic (using it with all the pejorative connotation now) pleasures of life. Why should I be good while the rest of them seem to be reveling in wall-to-wall fun? Is it my decaying internal quality talking, or am I just getting a bit smarter? I think what is confusing me, and probably you as well, is the small distance between pleasure and happiness. Doing the right thing results in happiness. Eating a good lunch results in pleasure. When Richard Feynman wrote about The Pleasure of Finding Things Out, he was probably talking about happiness. When I read that book, what I’m experiencing is probably closer to mere pleasure. Watching TV is probably pleasure. Writing this post, on the other hand, is probably closer to happiness. At least, I hope so.

To come back to my little story above, what could the mother say to her TV-watching son to impress upon him the wisdom of deferred satisfaction? Well, just about the only thing I can think of is the argument from hedonism: if the son wastes his time now watching TV, there is a very real possibility that he may not be able to afford a TV later on in life. Perhaps intrinsically good parents won’t let their children grow up into a TV-less adulthood. I suspect I would, because I believe in the intrinsic goodness of taking responsibility for one’s actions and their consequences. Does that make me a bad parent? Is it the right thing to do? Need we ask anyone to tell us these things?

My Life, My Way

After almost eight years in banking, I have finally called it quits. Over the last three of those years, I had been telling people that I was leaving. And I think people had stopped taking me seriously. My wife certainly did, and it came as a major shock to her. But despite her studied opposition, I managed to pull it off. In fact, it is not just banking that I left, I have actually retired. Most of my friends greeted the news of my retirement with a mixture of envy and disbelief. The power to surprise — it is nice to still have that power.

Why is it a surprise, really? Why would anyone think it insane to walk away from a career like mine? Insanity is doing the same thing over and over and expecting different results. Millions of people do the same insanely crummy stuff over and over, every one of them wanting nothing more than to stop doing it, even planning on it, only to postpone their plans for one silly reason or another. I guess the force of habit in doing the crummy stuff is greater than the fear of change. There is a gulf between what people say their plans are and what they end up doing, which is the theme of that disturbing movie Revolutionary Road. This gulf is extremely narrow in my case. I set out with a bunch of small targets — to help a few people, to make a modest fortune, to provide reasonable comfort and security to those near me. I have achieved them, and now it is time to stop. The trouble with all such targets is that once you get close to them, they look mundane, and nothing is ever enough for most people. Not for me though — I have always been reckless enough to stick to my plans.

One of the early instances of such a reckless action came during my undergraduate years at IIT Madras. I was pretty smart academically, especially in physics. But I wasn’t too good at remembering details like the names of theorems. Once, this eccentric professor of mine at IIT asked me the name of a particular theorem relating the line integral of the electric field around a point and the charge contained within. I think the answer was Green’s theorem, while its 3-D equivalent (surface integral) is called Gauss’s theorem or something. (Sorry, my Wikipedia and Google searches didn’t bring up anything definitive on that.) I answered Gauss’s theorem. The professor looked at me for a long moment with contempt in his eyes and said (in Tamil) something like I needed to get a beating with his slippers. I still remember standing there in my khaki workshop attire and listening to him, my face burning with shame and impotent anger. And, although physics was my favorite subject (my first love, in fact, as I keep saying, mostly to annoy my wife), I didn’t go back to any of his lectures after that. I guess even at that young age, I had this disturbing level of recklessness in me. I now know why. It is the ingrained conviction that nothing really matters. Nothing ever did, as Meursault the Stranger points out in his last bout of eloquence.

I left banking for a variety of reasons; remuneration wasn’t one of them, but recklessness perhaps was. I had some philosophical misgivings about the rightness of what I was doing at a bank. I suffered from a troubled conscience. Philosophical reasons are strange beasts — they lead to concrete actions, often disturbing ones. Albert Camus (in his collection The Myth of Sisyphus) warned of this while talking about the absurdity of life. Robert Pirsig, in his epilog to Zen and the Art of Motorcycle Maintenance, also talked about when such musings became psychiatrically dangerous. Michael Sandel is another wise man who, in his famous lectures on Justice: What is the Right Thing to Do?, pointed out that philosophy could often color your perspective permanently — you cannot unlearn it to go back, you cannot unthink a thought to become normal again.

Philosophy and recklessness aside, the other primary reason for leaving the job was boredom. The job got so colossally boring. Looking out my window at the traffic 13 floors below was infinitely more rewarding than looking at the work on my three computer screens. And so I spent half my time staring out the window. Of course, my performance dwindled as a result. I guess scuttling one’s performance is the only way to realistically make oneself leave a high-paying job. There are times when you have to burn the bridges behind you. Looking back at it now, I cannot really understand why I was so bored. I was a quantitative developer, and the job involved developing reports and tools. Coding is what I do for fun at home. That and writing, of course. Maybe the boredom came from the fact that there was no serious intellectual content in it. There was none in the tasks, nor in the company of the throngs of ambitious colleagues. Walking into the workplace every morning, looking at all the highly paid people walking around with impressive demeanors of doing something important, I used to feel almost sad. How important could their bean-counting ever be?

Then again, how important could this blogging be? We get back to Meursault’s tirade: rien n’avait d’importance, nothing had any importance. Perhaps I was wrong to have thrown it all away, as everyone keeps telling me. Perhaps those important-looking colleagues really were important, and I was the one in the wrong to have retired. But that, too, matters little, as Meursault and my alter ego would see it.

What next is the question that keeps coming up. I am tempted to give the same tongue-in-cheek answer as Larry Darrell in The Razor’s Edge — Loaf! My kind of loafing would involve a lot of thinking, a lot of studying, and hard work. There is so much to know, and so little time left to learn.


Rules of Conflicts

In this last post in the rules of the game series, we look at the creative use of the rules in a couple of situations. Rules can be used to create productive and predictable conflicts. One such conflict is in law enforcement, where cops hate defense attorneys — if we are to believe Michael Connelly’s depiction of how things work at LAPD. It is not as if they are really working against each other, although it may look that way. Both of them are working toward implementing a set of rules that will lead to justice for all, while avoiding power concentration and corruption. The best way of doing it happens to be by creating a perpetual conflict, which also happens to be fodder for Connelly’s work.

Another conflict of this kind can be seen in a bank, between the risk-taking arm (traders in the front office) and the risk-controlling teams (market and credit risk managers in the middle office). The incessant strife between them, in fact, ends up implementing the risk appetite of the bank as decided by the senior management. When the conflict is missing, problems arise. A trader's performance is quantified in terms of the profit (and, to a lesser degree, its volatility) he generates. This scheme seems to align the trader's interests with those of the bank, but it creates a positive feedback loop. As any electrical engineer will tell you, positive feedback leads to instability, while negative feedback (the conflict-driven mode) leads to stable configurations. The positive feedback results in rogue traders engaging in huge unauthorized trades, leading to enormous losses or outright collapses, like that of Barings Bank in 1995.
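The electrical engineer's intuition can be sketched in a few lines of code. This is a purely illustrative toy model (nothing from the original text): a quantity is repeatedly fed back into itself, and the sign of the feedback gain decides whether it explodes or settles down.

```python
def simulate(gain, steps=20, x0=1.0):
    """Iterate x -> x + gain * x for a number of steps.

    A positive gain models positive feedback (each deviation is
    amplified); a negative gain models negative feedback (each
    deviation is damped). The gain value here is arbitrary.
    """
    x = x0
    for _ in range(steps):
        x = x + gain * x
    return x

# Positive feedback: the initial value of 1.0 grows without bound.
diverging = simulate(gain=0.5)

# Negative feedback: the same initial value decays toward zero.
converging = simulate(gain=-0.5)

print(f"positive feedback after 20 steps: {diverging:.1f}")
print(f"negative feedback after 20 steps: {converging:.6f}")
```

After twenty steps the positive-feedback run has grown by a factor of a few thousand while the negative-feedback run has all but vanished, which is the engineer's point: a system whose incentives reinforce themselves has no equilibrium, while a built-in opposing force pulls it back to one.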

We can find other instances of reinforcing feedback generating explosive situations in the upper management of large corporations. High-level managers, sitting on the boards of multiple corporate entities, keep supporting each other's insane salary expectations, creating an unhealthy positive feedback. If the shareholders, on the other hand, decided the salary packages, their self-interest in minimizing expenses and increasing dividends (and the implicit conflict) would generate a more moderate equilibrium.

The rule of conflict is at work at much larger scales as well. In a democracy, political parties assume conflicting world views and agendas. Their conflict, ratified through the electoral process, ends up reflecting the median popular view, which is the way it should be. It is when their views become hopelessly polarized (as they seem to be in US politics these days) that we need to worry. An even greater worry is when one side of the conflict disappears or is beaten so thoroughly that it can no longer push back. In an earlier post, I lamented just that kind of one-sidedness in the ideological struggle between capitalism and socialism.

Conflicts are not limited to such large settings, or to our corporate life and detective stories. The most common conflict is the work-life balance that all of us struggle with. The issue is simple: we need to work to make a living, and work harder and longer to make a better living. In order to give the best to our loved ones, we put so much into our work that we end up sacrificing our time with the very loved ones we are supposedly working for. Of course, there is a bit of hypocrisy when workaholics choose work over life: they do it not so much for their loved ones as for a glorification, a justification or a validation of their existence. It is an unknown and unseen angst that drives them. Getting the elusive work-life conflict right often requires an appreciation of that angst, and unconventional choices. At times, in order to win, you have to break the rules of the game.