Accents

Indians pronounce the word “poem” as poyem. Today, my daughter wrote one for her friend’s birthday and she told me about her “poyem”. So I corrected her and asked her to say it as po-em, despite the fact that I also say it the Indian way during my unguarded moments. That got me thinking — why do we say it that way? I guess it is because certain diphthongs are unnatural in Indian languages. “OE” is not a natural thing to say, so we invent a consonant in between.

The French also do this. I had this funny conversation with a French colleague of mine at Geneva airport a long time ago, during my CERN days. Waiting in the airport lounge, we were making small talk. The conversation turned to food, as French conversations often do (although we were speaking in English at the time). My colleague made a strange statement: “I hate chicken.” I expressed my surprise and told her that I was rather fond of white meat. She said, “Non, non, I hate chicken for lunch.” I found it even stranger. Was it okay for dinner, then? Did poultry improve its appeal after sunset? She clarified further, “Non, non, non. I hate chicken for lunch today.”

I said to myself, “Relax, you can solve this mystery. You are a smart fellow, CERN scientist and whatnot,” and set to work. Sure enough, a couple of minutes of deep thinking revealed the truth behind the French conundrum. She had eaten chicken for lunch that day. The “IA” as in “I ate” is not a natural diphthong for the French, and they insert an H in between, which is totally strange because the French never say H (or the last fourteen letters of any given word, for that matter). H is a particularly shunned sound — they refuse to say it even when they are asked to. The best they can do is to aspirate it, as in the textbook example of “les haricots.” But when they shouldn’t say it, they do it with surprising alacrity. I guess alacrity is something we all readily find when it comes to things that we shouldn’t be doing.

Subprime: When Good Intentions Turned Sour

As the world still feels the reverberations of the 2008 global financial crisis, a lot of blame has been directed at subprime lending, and in particular subprime mortgages, as the cause of much of the economic meltdown. However, subprime mortgages had their basis in good intentions: they were initially introduced to help less well-off families get on the housing ladder.

In 1999, Bill Clinton, perhaps the United States’ most equality-driven president, asked the now notorious Fannie Mae, which at the time was America’s largest underwriter of home mortgages, to expand home loans to more people on low and moderate incomes. At the time, this was seen as a rather egalitarian move, allowing people without access to normal levels of borrowing, such as a credit card balance transfer or an overdraft, to buy their own homes.

Reducing Risk

The problem with subprime loans lay not in the initial intentions, but in the way banks tried to limit the risk of lending money to people on low incomes while, at the same time, making more money on the transaction. Of course, any money lent on a property has good security in the bricks and mortar. However, getting a return on any mortgage takes years, and banks wanted a quicker way of making money on these subprime loans while also limiting any potential risk. So they turned to what was seen as one of the greatest financial innovations of the 20th century: securitization.

Securitization is when banks bundle up loans into saleable packages, which they sell on to one another. Because the interest on subprime loans was much higher than normal loans, due to the credit risk of the borrowers, these securitized subprime packages offered large long-term rewards for buyers. By selling the loans on, banks reduced their risk, while at the same time the buyers of securities had the promise of a valuable long-term investment, which was seen as a win-win situation for all.

Rise and Fall of the Subprime

Banks began lending money to poorer and poorer people, tempting them with low initial interest rates. As a result, subprime lending doubled in just over a year. And because house prices continued to rise, even when borrowers the banks knew could never keep up the repayments defaulted, repossession would cover the loan and remove any risk. Until house prices fell.

Fuelled by record foreclosures as subprime borrowers failed to maintain their payments, house prices plummeted, as did confidence in subprime securities. What was once seen as a win-win investment opportunity became toxic. The result was the loss of billions in the banking industry and the start of the worst financial crisis in 80 years. However, the real victims in all this weren’t the failed banks and investors left with worthless securities, but the innocent, low-income homeowners who, thanks to the greed and impatience of bankers and investors, ended up losing their homes.

Betting on Failure is no Way to run an Economy

The most powerful word in the financial markets is confidence. No matter how well or how badly a business, security or even a currency is doing, if it can maintain market confidence, its price will remain high. If the market for some reason loses confidence, then regardless of what the balance sheet says, prices plummet. So, with confidence playing such an important part in maintaining a healthy economy, betting on failure could be argued to be one of the most toxic aspects of the entire economic system.

Hedge Funds

The phrase “hedging your bets” derives from bookmakers who, when faced with a customer placing a substantial wager, hedge the bet by placing their own wager elsewhere. And hedge funds are meant to work in the same way, reducing risk by insuring investments against failure.

Hedge funds deal in derivatives: essentially insurance contracts that pay out when things go wrong. Derivatives are not necessarily a bad thing; companies and banks need some system to protect themselves from unforeseen events. However, during the global financial crisis, the derivatives market was poorly regulated, allowing a web of interlinked derivatives to accumulate across the insurance and financial services industries. As soon as a problem such as the subprime crisis developed, financial insurers were left having to pay out huge sums. They then tried to recoup these losses by buying even more derivatives, causing a spiral of accumulating debt.

In the meantime, hedge funds were making so much money by betting on failures that their very presence was driving down confidence and leading to even more failures, which generated even more profits for the hedge funds and even more debt for the financial institutions. This resulted in the huge bailout figures that governments around the world have had to pay. The only winners in this whole fiasco have been the hedge fund companies. While derivatives and hedge funds didn’t start the financial crisis, this toxic investment in failure certainly escalated it.

Bye Bye Einstein

Starting from his miraculous year of 1905, Einstein has dominated physics with his astonishing insights on space and time, and on mass and gravity. True, there have been other physicists who, with their own brilliance, have shaped and moved modern physics in directions that even Einstein couldn’t have foreseen; and I don’t mean to trivialize either their intellectual achievements or our giant leaps in physics and technology. But all of modern physics, even the bizarre reality of quantum mechanics, which Einstein himself couldn’t quite come to terms with, is built on his insights. It is on his shoulders that those who came after him have stood for over a century now.

One of the brighter ones among those who came after Einstein cautioned us to guard against blind faith in the infallibility of old masters. Taking my cue from that insight, I, for one, think that Einstein’s century is behind us now. I know, coming from a non-practicing physicist who sold his soul to the finance industry, this declaration sounds crazy. Delusional, even. But I do have my reasons to see Einstein’s ideas go.

[animation]Let’s start with this picture of a dot flying along a straight line (on the ceiling, so to speak). You are standing at the centre of the line at the bottom (on the floor, that is). If the dot were moving faster than light, how would you see it? Well, you wouldn’t see anything at all until the first ray of light from the dot reached you. As the animation shows, the first ray will reach you when the dot is almost directly above you. The next rays you see actually come from two different points in the line of flight of the dot — one before that first point, and one after. Thus, the way you would see it, incredible as it may seem at first, is as one dot appearing out of nowhere and then splitting and moving rather symmetrically away from that point. (It is just that the dot is flying so fast that by the time you get to see it, it has already gone past you, and the rays from both behind and ahead reach you at the same instant in time. Hope that statement makes it clearer, rather than more confusing.)
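The geometry behind this can be checked with a few lines of arithmetic. This is only a toy sketch; the height, speed, and units are made up for illustration, but the logic follows the description above: light emitted at time t from position x = v·t arrives with a delay that depends on the distance to the observer.

```python
import math

# Toy setup (all values invented for illustration): a dot flies along a
# "ceiling" at height h above the observer, at speed v > c. x = 0 is the
# point directly overhead; the dot is at x = v * t at emission time t.
c = 1.0          # speed of light in our toy units
v = 2.0          # dot speed, superluminal
h = 1.0          # height of the line of flight above the observer

def arrival_time(t):
    """Time at which light emitted at t (from x = v*t) reaches the observer."""
    x = v * t
    return t + math.sqrt(h * h + x * x) / c

# Scan emission times and find when the FIRST light arrives.
ts = [i / 10000.0 for i in range(-50000, 50001)]
t_first = min(ts, key=arrival_time)

# Setting dT/dt = 0 gives x = -c*h / sqrt(v**2 - c**2): the first ray comes
# from a point just BEFORE the overhead position, so the dot seems to appear
# almost directly above, then split into two receding images.
x_first = v * t_first
x_pred = -c * h / math.sqrt(v * v - c * c)
print(x_first, x_pred)

# Any arrival time later than the first corresponds to TWO emission points,
# one behind and one ahead of t_first: the "splitting dot" illusion.
T_later = arrival_time(t_first) + 0.5
roots = [t for t in ts if abs(arrival_time(t) - T_later) < 5e-4]
print(roots[0] < t_first < roots[-1])
```

Run it and you see the first visible image forms slightly before the overhead point, and every later instant shows two images straddling it, exactly the symmetric split described above.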

[animation]Why did I start with this animation of how the illusion of a symmetric object can arise? Well, we see a lot of active symmetric structures in the universe. For instance, look at this picture of Cygnus A. There is a “core” from which “features” seem to emanate and float away to the “lobes.” Doesn’t it look remarkably similar to what we would see based on the animation above? There are other examples in which feature points or knots seem to move away from the core where they first appear. We could come up with a clever model based on superluminality and how it would create illusory symmetric objects in the heavens. We could, but nobody would believe us — because of Einstein. I know this — I tried to get my old physicist friends to consider this model. The response is always some variant of this: “Interesting, but it cannot work. It violates Lorentz invariance, doesn’t it?” Lorentz invariance being physics talk for Einstein’s insistence that nothing should go faster than light. Now that neutrinos can violate it, why not me?

Of course, if it were only a qualitative agreement between symmetric shapes and superluminal celestial objects, my physics friends would be right in ignoring me. There is much more. The lobes in Cygnus A, for instance, emit radiation in the radio frequency range. In fact, the sky as seen from a radio telescope looks materially different from what we see through an optical telescope. I could show that the spectral evolution of the radiation from this superluminal object fitted nicely with AGNs (active galactic nuclei) and another class of astrophysical phenomena, hitherto considered unrelated, called gamma ray bursts. In fact, I managed to publish this model a while ago under the title, “Are Radio Sources and Gamma Ray Bursts Luminal Booms?”

You see, I need superluminality. Einstein being wrong is a pre-requisite of my being right. So it is the most respected scientist ever vs. yours faithfully, a blogger of the unreal kind. You do the math. 🙂

Such long odds, however, have never discouraged me, and I always rush in where wiser angels fear to tread. So let me point out a couple of inconsistencies in SR. The derivation of the theory starts off by pointing out the effects of light travel time on time measurements. Later on in the theory, the distortions due to light travel time effects become part of the properties of space and time. (In fact, light travel time effects make it impossible to have a superluminal dot on a ceiling, as in my animation above — not even a virtual one, where you take a laser pointer and turn it fast enough that the laser dot on the ceiling would move faster than light. It won’t.) But, as the theory is understood and practiced now, the light travel time effects are to be applied on top of the space and time distortions (which were due to the light travel time effects to begin with)! Physicists turn a blind eye to this glaring inconsistency because SR “works” — as I made very clear in my previous post in this series.

Another philosophical problem with the theory is that it is not testable. I know, I alluded to a large body of proof in its favor, but fundamentally, the special theory of relativity makes predictions about a uniformly moving frame of reference in the absence of gravity. There is no such thing. Even if there were, in order to verify the predictions (that a moving clock runs slower, as in the twin paradox, for instance), you have to have acceleration somewhere in the verification process. Two clocks have to come back to the same point to compare time. The moment you do that, at least one of the clocks has accelerated, and the proponents of the theory would say, “Ah, there is no problem here; the symmetry between the clocks is broken because of the acceleration.” People have argued back and forth about such thought experiments for an entire century, so I don’t want to get into it. I just want to point out that the theory by itself is untestable, which should also mean that it is unprovable. Now that there is direct experimental evidence against the theory, maybe people will take a closer look at these inconsistencies and decide that it is time to say bye-bye to Einstein.
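For readers who haven’t met the twin paradox numerically: the “moving clock runs slower” prediction is quantified by the Lorentz factor. The speed and trip length below are arbitrary, chosen only to make the arithmetic come out round.

```python
import math

def lorentz_gamma(v, c=1.0):
    """Lorentz factor for speed v (natural units, c = 1 by default)."""
    return 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# The travelling twin cruises at 0.8c while 10 years pass on Earth;
# her own clock advances only 10 / gamma = 6 years.
v = 0.8
earth_years = 10.0
traveller_years = earth_years / lorentz_gamma(v)
print(lorentz_gamma(v), traveller_years)   # -> 1.666..., 6.0
```

The asymmetry, as noted above, is blamed on the acceleration during the turnaround; the formula itself says nothing about which twin is “really” moving.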

Why not Discard Special Relativity?

Nothing would satisfy my anarchical mind more than to see the Special Theory of Relativity (SR) come tumbling down. In fact, I believe that there are compelling reasons to consider SR inaccurate, if not actually wrong, although the physics community would have none of that. I will list my misgivings vis-a-vis SR and present my case against it as the last post in this series, but in this one, I would like to explore why it is so difficult to toss SR out the window.

The special theory of relativity is an extremely well-tested theory. Despite my personal reservations about it, the body of proof for the validity of SR is enormous, and the theory has stood the test of time — at least so far. But it is the integration of SR into the rest of modern physics that makes it all but impossible to write it off as a failed theory. In experimental high energy physics, for instance, we compute the rest mass of a particle as its identifying statistical signature. The way it works is this: in order to discover a heavy particle, you first detect its daughter particles (decay products, that is), measure their energies and momenta, add them up (as “4-vectors”), and compute the invariant mass of the system as the modulus of the aggregate energy-momentum vector. In accordance with SR, the invariant mass is the rest mass of the parent particle. You do this many thousands of times, make a distribution (a “histogram”), and look for any statistically significant excess at any mass. Such an excess is the signature of the parent particle at that mass.
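The invariant-mass recipe described above is only a few lines of code. Here is a toy sketch in natural units (c = 1), with made-up daughter four-vectors; a real analysis would fill a histogram with this quantity over thousands of events and look for a peak.

```python
import math

def invariant_mass(particles):
    """particles: list of (E, px, py, pz) four-vectors in natural units (c = 1).
    Returns the modulus of the summed energy-momentum four-vector."""
    E  = sum(p[0] for p in particles)
    px = sum(p[1] for p in particles)
    py = sum(p[2] for p in particles)
    pz = sum(p[3] for p in particles)
    return math.sqrt(E * E - px * px - py * py - pz * pz)

# Toy decay: a parent of mass 1.0 at rest decays into two back-to-back
# massless daughters, each carrying E = 0.5. Summing the daughters'
# four-vectors reconstructs the parent's rest mass.
daughters = [(0.5, 0.0, 0.0, 0.5), (0.5, 0.0, 0.0, -0.5)]
print(invariant_mass(daughters))   # -> 1.0
```

The point of the construction is that the invariant mass comes out the same in every frame, which is precisely the SR input the whole procedure relies on.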

Almost every one of the particles in the particle data book that we know and love was detected using some variant of this method. So the whole Standard Model of particle physics is built on SR. In fact, almost all of modern physics (the physics of the 20th century) is built on it. On the theory side, in the thirties, Dirac derived a framework to describe electrons. It combined SR and quantum mechanics in an elegant framework and predicted the existence of positrons, which was borne out later on. Although considered incomplete because of its lack of a sound physical backdrop, this “second quantization” and its subsequent experimental verification can rightly be seen as evidence for the rightness of SR.

Feynman took it further and completed the quantum electrodynamics (QED), which has been the most rigorously tested theory ever. To digress a bit, Feynman was once being shown around at CERN, and the guide (probably a prominent physicist himself) was explaining the experiments, their objectives etc. Then the guide suddenly remembered who he was talking to; after all, most of the CERN experiments were based on Feynman’s QED. Embarrassed, he said, “Of course, Dr. Feynman, you know all this. These are all to verify your predictions.” Feynman quipped, “Why, you don’t trust me?!” To get back to my point and reiterate it, the whole edifice of the standard model of particle physics is built on top of SR. Its success alone is enough to make it impossible for modern physics to discard SR.

So, if you take away SR, you don’t have the Standard Model and QED, and you don’t know how accelerator experiments and nuclear bombs work. The fact that they do is proof enough for the validity of SR, because the alternative (that we managed to build all these things without really knowing how they work) is just too weird. It’s not just the exotic (nuclear weaponry and CERN experiments), but the mundane that should convince us. Fluorescent lighting, laser pointers, LED, computers, mobile phones, GPS navigators, iPads — in short, all of modern technology is, in some way, a confirmation of SR.

So the OPERA result on observed superluminality has to be wrong. But I would like it to be right. And I will explain why in my next post: why everything we accept as a verification of SR could be a case of mass delusion — almost literally. Stay tuned!

Faster than Light

CERN has published news about some subatomic particles exceeding the speed of light, according to the BBC and other sources. If confirmed true, this will remove the linchpin of modern physics — it is hard to overstate how revolutionary this discovery would be to our collective understanding of the world we live in, from the finest structure of matter to the time evolution of the cosmos. My own anarchical mind revels at the thought of all of modern physics getting rewritten, but I also have a much more personal stake in this story. I will get to it later in this series of posts. First, I want to describe the backdrop of thought that led to the notion that the speed of light could not be breached. The soundness of that scientific backdrop (if not the actual conclusion about the inviolability of light-speed) makes it very difficult to forgo the intellectual achievements of the past one hundred years in physics, which is what we will be doing once we confirm this result. In my second post, I will list what these intellectual achievements are, and how drastically their form will have to change. The scientists who discovered the speed violation, of course, understand this only too well, which is why they are practically begging the rest of the physics community to find a mistake in their discovery. As often happens in physics, if you look for something hard enough, you are sure to find it — this is the experimental bias that all experimental physicists worth their salt are aware of and battle against. I hope a false negation doesn’t happen, for, as I will describe in my third post in this series, if confirmed, this speed violation is of tremendous personal importance to me.

The constancy (and the resultant inviolability) of the speed of light, of course, comes from Einstein’s Special Theory of Relativity, or SR. This theory is an extension of a simple idea. In fact, Einstein’s genius lies in his ability to carry a simple idea to its logically inevitable, albeit counter-intuitive (to the point of being illogical!), conclusion. In the case of SR, he picked an idea so obvious — that the laws of physics should be independent of the state of motion. If you are in a train going at a constant speed, for instance, you can’t tell whether you are moving or not (if you close the windows, that is). The statement “You can’t tell” can be recast in physics as, “There is no experiment you can devise to detect your state of motion.” This should be obvious, right? After all, if the laws kept changing every time you moved about, it would be as good as having no laws at all.

Then came Maxwell. He wrote down the equations of electricity and magnetism, thereby elegantly unifying them. The equations state, using fancy vector notations, that a changing magnetic field will create an electric field, and a changing electric field will create a magnetic field, which is roughly how a car alternator and an electric motor work. These elegant equations have a wave solution.

The existence of a wave solution is no surprise, since a changing electric field generates a magnetic field, which in turn generates an electric field, which generates a magnetic field, and so on ad infinitum. What is surprising is the fact that the speed of propagation of this wave predicted by Maxwell’s equations is c, the speed of light. So it was natural to suppose that light is a form of electromagnetic radiation, which means that if you take a magnet and jiggle it fast enough, you will get light moving away from you at c – if we accept that light is indeed an EM wave.
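The numerical coincidence that clinched the identification is easy to check for yourself: the wave speed that falls out of Maxwell’s equations is 1/√(μ₀ε₀), built from two constants measured in bench-top electricity and magnetism experiments, and it comes out to the speed of light.

```python
import math

mu0  = 4 * math.pi * 1e-7     # vacuum permeability, H/m (classical defined value)
eps0 = 8.8541878128e-12       # vacuum permittivity, F/m

# Maxwell's wave solution propagates at 1 / sqrt(mu0 * eps0).
c_predicted = 1.0 / math.sqrt(mu0 * eps0)
print(c_predicted)   # -> ~2.998e8 m/s, the measured speed of light
```

Two numbers from tabletop electromagnetism reproducing the astronomical speed of light is what made the “light is an EM wave” conclusion irresistible.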

What is infinitely more fundamental is the question whether Maxwell’s equations are actually laws of physics. It is hard to argue that they aren’t. Then the follow-up question is whether these equations should obey the axiom that all laws of physics are supposed to obey — namely they should be independent of the state of motion. Again, hard to see why not. Then how do we modify Maxwell’s equations such that they are independent of motion? This is the project Einstein took on under the fancy name, “Covariant formulation of Maxwell’s equations,” and published the most famous physics article ever with an even fancier title, “On the Electrodynamics of Moving Bodies.” We now call it the Special Theory of Relativity, or SR.

To get a bit technical, Maxwell’s equations relate the space derivatives of the electric and magnetic fields to the time derivatives of charges and currents. In other words, space and time are related through the equations. And the wave solution to these equations, with its propagation speed of c, becomes a constraint on the properties of space and time. This is a simple philosophical look at SR, more than a physics analysis.

Einstein’s approach was to employ a series of thought experiments to establish that you needed a light signal to sync clocks and hypothesize that the speed of light had to be constant in all moving frames of reference. In other words, the speed of light is independent of the state of motion, as it has to be if Maxwell’s equations are to be laws of physics.

This aspect of the theory is supremely counter-intuitive, which is physics lingo for saying something is hard to believe. In the case of the speed of light, if you take a ray of light, run along with it at high speed, and measure its speed, you still get c. Run against it and measure it — still c. To achieve this constancy, Einstein rewrote the equations of velocity addition and subtraction. One consequence of these rewritten equations is that nothing can go faster than light.
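Einstein’s replacement for naive velocity addition is compact enough to play with directly. The speeds below (in units of c) are arbitrary illustrations:

```python
def add_velocities(u, v, c=1.0):
    """Einstein's velocity-addition rule, replacing the naive u + v."""
    return (u + v) / (1.0 + u * v / (c * c))

# Chase a light ray at 0.9c: from the chasing frame the ray still does c.
print(add_velocities(1.0, -0.9))   # -> 1.0

# Stack two 0.9c boosts: you creep toward c but never pass it.
print(add_velocities(0.9, 0.9))    # -> 0.994...
```

The denominator is the whole trick: it is negligible for everyday speeds (recovering u + v) and pins the result at c whenever either input is c.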

This is my long-winded description of the context in which the speed violation measured at OPERA has to be seen. If the violation is confirmed, we have a few unpleasant choices to pick from:

  1. Electrodynamics (Maxwell’s equations) is not invariant under motion.
  2. Light is not really electromagnetic in nature.
  3. SR is not the right covariant formulation of electrodynamics.

The first choice is patently unacceptable because it is tantamount to stating that electrodynamics is not physics. A moving motor (e.g., if you take your electric razor on a flight) would behave differently from a static one (you may not be able to shave). The second choice is also quite absurd. In addition to the numeric equality between the speed of the waves from Maxwell’s equations and the measured value of c, we have other compelling reasons to believe that light is an EM wave. Radio waves induce electric signals in an antenna, light knocks off electrons, microwaves can excite water molecules and cook food, and so on.

The only real choice we are left with is the last one — which is to say SR is wrong. Why not discard SR? There are more reasons than a blog post can summarize, but I’ll try to summarize them anyway in my next post.

How to Avoid Duplicate Imports in iPhoto

For the budding photographer in you, iPhoto is a godsend. It is the iLife photo organization program that comes pre-installed on your swanky new iMac or MacBook Air. In fact, I would go as far as to say that iPhoto is one of the main reasons to switch to a Mac. I know, there are alternatives, but for seamless integration and smooth-as-silk workflow, iPhoto reigns supreme.

But (ah, there is always a “but”), the workflow in iPhoto can create a problem for some. It expects you to shoot pictures, connect your camera to your Mac, move the photos from the camera to the Mac, enhance/edit, and share (Facebook, flickr) or print or make photo books. This flow (with some face recognition, red-eye removal, event/album creation, etc.) works like a charm — if you are just starting out with your new digital camera. But what if you already have 20,000 old photos and scans on your old computer (in “My Pictures”)?

This is the problem I was faced with when I started playing with iPhoto. I pride myself on anticipating such problems. So I decided to import my old library very carefully. While importing “My Pictures” (which was fairly organized to begin with), I went through it folder by folder, dragging and dropping them onto iPhoto and, at the same time, labeling them (and the photos therein) with what I thought were appropriate colors. (I used the “Get Info” function in Finder for color labels.) I thought I was being clever, but I ended up with a fine (but colorful) mess, with my folders and photos sporting random colors. It looked impossible to figure out where my 20,000 photos had been imported to in iPhoto, so I decided to write my very first Mac app — iPhotoTagger. It took me about a week to write, but it sorted out my photo worries. Now I want to sell it and make some money.

Here is what it does. It first goes through your iPhoto library and catalogs what you have there. It then scans the folder you specify and compares the photos in there with those in your library. If a photo is found exactly once, it will get a Green label, so that it stands out when you browse to it in your Finder (which is Mac-talk for Windows Explorer). Similarly, if the photo appears more than once in your iPhoto library, it will be tagged in Yellow. And, going the extra-mile, iPhotoTagger will color your folder Green if all the photos within have been imported into your iPhoto library. Those folders that have been partially imported will be tagged Yellow.

The photo comparison is done using Exif data, and is fairly accurate. Note that iPhotoTagger doesn’t modify anything within your iPhoto library. Doing so would be unwise. It merely reads the library to gather information.
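The tagging logic itself can be sketched in a few lines. This is only a toy illustration with invented data: the signature strings below stand in for the Exif-derived fingerprints the real app builds from the files and the iPhoto library.

```python
from collections import Counter

# Hypothetical Exif-derived signatures; in the real app these would come
# from fields like DateTimeOriginal, camera model, and image dimensions.
library_signatures = ["sig-a", "sig-b", "sig-b", "sig-c"]   # photos in iPhoto
folder = {                                                  # photos on disk
    "holiday/img1.jpg": "sig-a",
    "holiday/img2.jpg": "sig-b",
    "holiday/img3.jpg": "sig-z",   # never imported
}

counts = Counter(library_signatures)

def tag_photo(sig):
    """Green: imported exactly once. Yellow: duplicated. None: not imported."""
    if counts[sig] == 1:
        return "green"
    if counts[sig] > 1:
        return "yellow"
    return None

photo_tags = {path: tag_photo(sig) for path, sig in folder.items()}

# Folder tag: green if every photo inside was imported, yellow if only some.
if all(photo_tags.values()):
    folder_tag = "green"
elif any(photo_tags.values()):
    folder_tag = "yellow"
else:
    folder_tag = None

print(photo_tags, folder_tag)
```

Here the folder comes out Yellow because one photo was never imported, which matches the partially-imported rule described above.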

This first version (V1.0) is released to test the waters, as it were, and is priced at $1.99. If there is enough interest, I will work on V2.0 with improved performance (using Perl and SQLite, if you must know), which I will price at $2.99. And if the interest doesn’t wane, a V3.0 (for $3.99) will appear with a proper help file, a preference pane, options to choose your own color scheme, Spotlight comments (and, if you must know, probably rewritten in Objective-C). Before you rush to send me money, please know that iPhotoTagger requires Snow Leopard or Lion (OS X 10.6 or 10.7). If in doubt, you can download the lite version and play with it. It is fully functional, and will create lists of photos/folders to be tagged Green and Yellow, but won’t actually tag them.

Accidental Writer

I consider myself an accidental writer. Despite the modest success I enjoyed as a published writer and a columnist, writing is not where my talents lie. I wrote my first book because I thought I had something important to say. To be sure, I still believe what I said is pretty important for the world to know, and is getting more relevant in the light of the recent discovery of superluminal neutrinos. But when it comes to writing, such a sense of importance is a bit beside the point. A singer is a singer because he has a good voice and adequate singing talent, not because he knows a good song to sing.

This principle holds true in writing as well. It is not so much what you write about as how you write it that makes you a writer. So writing my first book was hard. I had to learn how to write. And here are some writing tips for my fellow accidental writers.

First of all, you have to have a good grasp of grammar — that goes without saying. In fact, it goes beyond the basic subject-predicate, parts-of-speech, sequence-of-tense kind of rules. Those run-of-the-mill rules you can pick up from any standard textbook, The Chicago Manual of Style, etc. What these books leave out, though, are a couple of simple tips on connecting sentences, closing paragraphs, and even chaining chapters into books. Grammar is usually taught and understood as something that applies at the sentence level, not to the collections of sentences that form a paragraph, a chapter, an article or a book.

I have to point out a pattern here; the techie in me won’t let it slide. A vocabulary book can teach you some good words, but that is not enough. Basic grammar tells you how to form good sentences using words. How do you put sentences together to make good prose? Maybe standard writing courses teach you that, but I haven’t taken any, and it came as a revelation to me when I learned these rules (from a Frenchman, as it happened). It is this high-level meta-grammar that I want to share with you here.

With that dramatic and long-winded introduction, let’s sink our teeth into it. All strong sentences have a subject, an object and an action. In that last sentence, for instance, “all strong sentences” is the subject, “have” is the action, and the list of things they have is the object. The subject is the topic of the sentence. The subject of the first sentence in a paragraph is the topic of the paragraph.

All right, this post got published by accident just now — I thought I was updating the draft, but hit the publish button instead. Serves me right, I guess, what with the title “Accidental Writer” 🙂

Let me try to wrap up quickly here. My last teacher didn’t like rules at all, but what can I say — I’m a techie and I need rules. Rule number one is to stay on the topic. If you open a paragraph with “Mary had a little lamb,” Mary is the topic. The second sentence has to be about Mary if you are to make a decent paragraph. So you could say, “She went everywhere with her lamb,” or “Mary also had a cat,” and so on. That way, you stay on the topic of Mary.

The second rule is about how you transition from one topic to the next. It has to happen through the object (or the action). Let me illustrate: “Mary had a little lamb. It followed her everywhere.” Now the topic has switched to the lamb, and you are free to babble on about the lamb — like, “It was black in color.” What would be wrong is a paragraph like this: “Mary had a little lamb. It followed her everywhere. It was black in color. Mary also had a cat.” The last sentence abruptly switches the topic back to Mary, and this is not good. In this toy example, you may not find it too jarring, but in more complex, real-life sentences, such a switch may be enough to lose your reader. The reason, I think, is that a transition that breaks my second rule makes your reader work a little harder; his brain gets a bit fatigued, and he interprets this fatigue as a boring style of writing.

I can easily modify my example paragraph to be less damaging. Here it is: “Mary had a little lamb. It was black in color. It followed her everywhere. Mary also had a cat.” I switched the second and third sentences, and now it follows the transition rule and is a better paragraph, I think.

The third rule is something that every book on writing tells you — avoid the passive voice. Avoid it even when it sounds a bit unnatural. In order to make this rule “mine,” I add a proviso — you can use the passive when you really cannot find another way of switching topics. Consider this paragraph: “I wrote many articles for a newspaper. One of my articles was noticed by an editor of a magazine, and he contacted me.” The passive voice is not too bad in the second sentence there because it keeps the topic on my articles, and the next sentence is probably about an article I’m going to write for this editor. Slightly better would be, “One of my articles caught the attention of…” or “… caught the eye of…” It is only slightly better because of the need to use multiple “of’s” — something to avoid, according to a later rule we may not have time to get to. Anyway, a much better idea would be to rewrite the whole paragraph.

The fourth rule is to avoid weak sentence beginnings, such as “There is/are…” and “It is…” They are almost like passive voice. They cannot stay on a topic as defined in my first couple of rules. Associated with this rule is the awareness that the beginnings of sentences (especially the first sentence in a paragraph) are the most effective spots in your article. Don’t squander them with anything weak. Even transitional phrases usually placed at the beginning of sentences (“However,” “On the other hand,” “Therefore,” “For example,” etc.) would waste the spot. I, therefore, use the trick I just used in this sentence to move the weaker structure away from the beginning.

Knowing the subtle weakness of certain structures may come in handy in certain situations. You could, for instance, say to your boss, “There is something wrong with the computer,” or “The computer stopped working,” depending on whether it is you or your annoying co-worker who spilled coffee on it. (By the way, did you see how I moved “for instance” away from the beginning of the sentence? And didn’t move “By the way”? These are not accidents; they are choices.)

Native speakers of the language clearly have an advantage. They follow most of these rules naturally when they speak, I think. They can make use of their naturally assimilated sense of good prose by recording what they want to write and later transcribing it. I’m not a native speaker of English, and my grasp of my own native tongue is now so weak that this trick would never work for me, even in my mother tongue. The trick I do use to force myself to revise is to actually write (I mean, using a pen, on a sheet of paper) what I want to say. This way, when I type it in, I have one more revision forced upon me. Revisions are easier on a sheet of paper because you can always see what you are trying to revise — it stays put. On a computer screen, when you cut and paste to move something, it momentarily disappears from your field of vision. Even when you type in new stuff, the rest of the paragraph moves around, which bothers me.

Unfortunately, this article (the second half of it) didn’t benefit from the physical writing process and might turn out to be a weaker piece for it. I’m relying on my writing skills (cultivated by experience) to get it right. But such reliance on one’s ability is usually misplaced for an accidental writer. You see, despite what the title of this post says, good writing is never an accident.

Risk – Wiley FinCAD Webinar

This post is an edited version of my responses in a webinar panel discussion organized by Wiley-Finance and FinCAD. The freely available webcast is linked in the post and contains responses from the other participants — Paul Wilmott and Espen Haug. An expanded version of this post may later appear as an article in the Wilmott Magazine.

What is Risk?

When we use the word Risk in normal conversation, it has a negative connotation — risk of getting hit by a car, for instance; but not the risk of winning a lottery. In finance, risk is both positive and negative. At times, you want the exposure to a certain kind of risk to counterbalance some other exposure; at times, you are looking for the returns associated with a certain risk. Risk, in this context, is almost identical to the mathematical concept of probability.

But even in finance, you have one kind of risk that is always negative — it is Operational Risk. My professional interest right now is in minimizing the operational risk associated with trading and computational platforms.

How do you measure Risk?

Measuring risk ultimately boils down to estimating the probability of a loss as a function of something — typically the intensity of the loss and time. So it’s like asking — What’s the probability of losing a million dollars or two million dollars tomorrow or the day after?

The question whether we can measure risk is another way of asking whether we can figure out this probability function. In certain cases, we believe we can — in Market Risk, for instance, we have very good models for this function. Credit Risk is a different story — although we thought we could measure it, we learned the hard way that we probably could not.

The question of how effective the measure is, in my view, is like asking ourselves, “What do we do with a probability number?” If I do a fancy calculation and tell you that you have a 27.3% probability of losing one million tomorrow, what do you do with that piece of information? Probability has a reasonable meaning only in a statistical sense, in high-frequency events or large ensembles. Risk events, almost by definition, are low-frequency events, and a probability number may have only limited practical use. But as a pricing tool, accurate probability is great, especially when you price instruments with deep market liquidity.
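To make the probability-of-loss function a little more concrete, here is a toy Monte Carlo sketch. It is my own illustration, not anything presented in the webinar: it estimates the chance of losing more than a given amount in one day, under the loud (and, as the Credit Risk experience showed, potentially fatal) assumption that daily returns are normally distributed, with portfolio size and volatility numbers I made up.

```python
import random

def loss_probability(portfolio_value, loss_threshold, mu, sigma,
                     n_sims=100_000, seed=42):
    """Monte Carlo estimate of P(one-day loss > loss_threshold),
    assuming daily returns are normally distributed with mean mu and
    standard deviation sigma -- a strong modeling assumption."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_sims):
        daily_return = rng.gauss(mu, sigma)
        loss = -daily_return * portfolio_value  # positive when money is lost
        if loss > loss_threshold:
            exceed += 1
    return exceed / n_sims

# Hypothetical numbers: a $50M portfolio, zero drift, 2% daily volatility,
# asking for the probability of losing more than $1M tomorrow.
p = loss_probability(50e6, 1e6, mu=0.0, sigma=0.02)
```

With these made-up numbers the estimate comes out near 16%, which is precisely the kind of number whose practical use the paragraph above questions: it is a fine input to a pricing model, but a puzzling thing to act on for a one-off event.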

Innovation in Risk Management

Innovation in Risk comes in two flavors — one is on the risk taking side, which is in pricing, warehousing risk and so on. On this front, we do it well, or at least we think we are doing it well, and innovation in pricing and modeling is active. The flip side of it is, of course, risk management. Here, I think innovation actually lags behind catastrophic events. Once we have a financial crisis, for instance, we do a post-mortem, figure out what went wrong and try to implement safety guards. But the next failure, of course, is going to come from some other, totally unexpected angle.

What is the role of Risk Management in a bank?

Risk taking and risk management are two aspects of a bank’s day-to-day business. These two aspects seem in conflict with each other, but the conflict is no accident. It is through fine-tuning this conflict that a bank implements its risk appetite. It is like a dynamic equilibrium that can be tweaked as desired.

What is the role of vendors?

In my experience, vendors seem to influence the processes rather than the methodologies of risk management, and indeed of modeling. A vended system, however customizable it may be, comes with its own assumptions about the workflow, lifecycle management etc. The processes built around the system will have to adapt to these assumptions. This is not a bad thing. At the very least, popular vended systems serve to standardize risk management practices.

What is Unreal Blog?

Tell us a little about why you started your blog, and what keeps you motivated about it.

As my writings started appearing in different magazines and newspapers as regular columns, I wanted to collect them in one place — as an anthology of the internet kind, as it were. That’s how my blog was born. The motivation to continue blogging comes from the memory of how my first book, The Unreal Universe, took shape out of the random notes I started writing on scrapbooks. I believe the ideas that cross anybody’s mind often get forgotten and lost unless they are written down. A blog is a convenient platform to put them down. And, since the blog is rather public, you take some care and effort to express yourself well.

Do you have any plans for the blog in the future?

I will keep blogging, roughly at the rate of one post a week or so. I don’t have any big plans for the blog per se, but I do have some other Internet ideas that may spring from my blog.

Philosophy is usually seen as a very high concept, intellectual subject. Do you think that it can have a greater impact in the world at large?

This is a question that troubled me for a while. And I wrote a post on it, which may answer it to the best of my ability. To repeat myself a bit, philosophy is merely a description of whatever intellectual pursuits we indulge in. It is just that we don’t often see it that way. For instance, if you are doing physics, you think that you are quite far removed from philosophy. The philosophical spin that you put on a theory in physics is mostly an afterthought, or so it is believed. But there are instances where you can actually apply philosophy to solve problems in physics, and come up with new theories. This indeed is the theme of my book, The Unreal Universe. It asks the question, if some object flew by faster than the speed of light, what would it look like? With the recent discovery that solid matter does travel faster than light, I feel vindicated and look forward to further developments in physics.

Do you think many college students are attracted to philosophy? What would make them choose to major in it?

In today’s world, I am afraid philosophy is supremely irrelevant. So it may be difficult to get our youngsters interested in philosophy. I feel that one can hope to improve its relevance by pointing out the interconnections between whatever it is that we do and the intellectual aspects behind it. Would that make them choose to major in it? In a world driven by excesses, it may not be enough. Then again, it is a world where articulation is often mistaken for accomplishment. Perhaps philosophy can help you articulate better, sound really cool and impress that girl you have been after — to put it crudely.

More seriously, though, what I said about the irrelevance of philosophy can be said about, say, physics as well, despite the fact that it gives you computers and iPads. For instance, when Copernicus came up with the notion that the earth is revolving around the sun rather than the other way round, profound though this revelation was, in what way did it change our daily life? Do you really have to know this piece of information to live your life? This irrelevance of such profound facts and theories bothered scientists like Richard Feynman.

What kind of advice or recommendations would you give to someone who is interested in philosophy, and who would like to start learning more about it?

I started my path toward philosophy via physics. I think philosophy by itself is so detached from everything else that you cannot really start with it. You have to find your way toward it from whatever your work entails, and then expand from there. At least, that’s how I did it, and that way made it very real. When you ask yourself a question like what is space (so that you can understand what it means to say that space contracts, for instance), the answers you get are very relevant. They are not some philosophical gibberish. I think similar paths to relevance exist in all fields. See for example how Pirsig brought out the notion of quality in his work, not as an abstract definition, but as an all-consuming (and eventually dangerous) obsession.

In my view, philosophy is a wrapper around multiple silos of human endeavor. It helps you see the links among seemingly unrelated fields, such as cognitive neuroscience and special relativity. Of what practical use is this knowledge, I cannot tell you. Then again, of what practical use is life itself?

Belle Piece

Here is a French joke that is funny only in French. I present it here as a puzzle to my English-speaking readers.

This colonel in the French army was in the restroom. As he was midway through the business of relieving his bladder, he became aware of a tall general standing next to him, and realized that it was none other than Charles De Gaulle. Now, what do you do when you find yourself a sort of captive audience next to your big boss for a couple of minutes? Well, you have to make small talk. So this colonel racks his brain for a suitable subject. Noticing that the restroom is a classy, tip-top joint, he ventures:

“Belle pièce!” (“Nice room!”)

CDG’s ice-cold tone indicates to him the enormity of the professional error he has just committed:

“Regardez devant vous.” (“Don’t peek!”)