Blank Screen after Hibernate or Sleep?

Okay, the short answer: increase your virtual memory to more than the size of your physical memory.

Long version now. Recently, I had this problem with my PC that it wouldn’t wake up from hibernation or sleep mode properly. The PC itself would be on and churning, but the screen would switch to power save mode, staying blank. The only thing to do at that point would be to restart the computer.

Like the good netizen that I am, I trawled the Internet for a solution. But didn’t find any. Some suggested upgrading the BIOS, replacing the graphics card and so on. Then I saw this mentioned in a Linux group, saying that the size of the swap file should be more than the physical memory, and decided to try it on my Windows XP machine. And it solved the problem!

So the solution to this issue of blank screen after waking up is to set the size of the virtual memory to something larger than the memory in your system. If you need more information, here is how, in step-by-step form. These instructions apply to a Windows XP machine.

  1. Right-click on “My Computer” and hit “Properties.”
  2. Take a look at the RAM size, and click on the “Advanced” tab.
  3. Click on the “Settings” button under the “Performance” group box.
  4. In the “Performance Options” window that comes up, select the “Advanced” tab.
  5. In the “Virtual Memory” group box near the bottom, click on the “Change” button.
  6. In the “Virtual Memory” window that pops up, set the “Custom Size” to something more than your RAM size (that you saw in step 2). Click “Set” and then “OK” to apply the change; Windows may ask you to restart. You can place the page file on any hard disk partition that you have, but if you are going through all these instructions, chances are you have only “C:”. In my case, I chose to put it on “M:”.

Gurus of a Disturbing Kind

Perhaps it has got something to do with my commie roots, but I am a skeptic, especially when it comes to the “godmen” of India. I cannot understand how they can inspire such blind belief. Where the believers see miracles, I see sleight of hand. When they hear pearls of wisdom, I can hear only gibberish. And when the new age masters claim to be in deep meditation, I cannot help but suspect that they are just dozing off.

Although my skepticism renders me susceptible to seeing the darker side of these modern day saints, I do have a counterbalancing respect for our heritage and culture, and the associated wisdom and knowledge. It is always with a thrill of awe and pride that I listen to Swami Vivekananda’s century-old Chicago speeches, for instance.

The speeches of the modern yogis, on the other hand, fill me with bewilderment and amused confusion. And when I hear of their billion dollar stashes, bevies of Rolls-Royces, and claims of divinity, I balk. When I see the yogis and their entourage jet-setting in first class to exotic holiday destinations with the money extracted in the name of thinly disguised charities, I feel a bit outraged. Still, I am all for live-and-let-live. If there are willing suckers eager to part with their dough and sponsor their guru’s lifestyle, it is their lookout. After all, there are those who financed the Madoffs and Stanfords of the greedy era we live in, where fraud is a sin only when discovered.

Now I wonder if it is time that the skeptics among us started speaking out. I feel that the spiritual frauds are of a particularly disturbing kind. Whether we see it that way or not, we are all trying to find a purpose and meaning to our existence on this planet through our various pursuits. We may find the elusive purpose in fame, glory, money, charity, philanthropy, knowledge, wisdom and in any of the hundreds of paths. All these pursuits have their associated perils of excess. If you get greedy, for instance, there is always a Madoff waiting in the wings to rip you off. If you become too charitable, there are other characters eager to separate you from your money, as my Singaporean readers will understand.

Of all these pursuits, spirituality is of a special kind; it is a shortcut. It gives you a direct path to a sense of belonging, and a higher purpose right away. Smelling blood in the carefully cultivated need for spirituality (whatever spirituality means), the yogis and maharishis of our time have started packaging and selling instant nirvana in neat three or five day courses that fit your schedule, while demanding vast sums of “not-for-profit” money. Even this duplicity would be fine by me. Who am I to sit in judgment of people throwing money at their inner needs, and gurus picking it up? But, of late, I am beginning to feel that I should try to spread a bit of rationality around.

I decided to come out of my passive mode for two reasons. One is that the gurus engage their victims in their subtle multi-level marketing schemes, ensnaring more victims. A pupil today is a teacher tomorrow, fueling an explosive growth of self-serving organizations. The second reason is that the gurus demand that the followers donate their time. I think the victims do not appreciate the enormity of this unfair demand. You see, you have only a limited time to live, to do whatever it is that you think will lead to fulfillment. Don’t spend it on wrong pursuits, because there is always something that you are sacrificing in the process, be it quality time with your loved ones, or the opportunity to learn, travel, or enjoy life. Time is a scarce resource, and you have to spend it wisely, or you will regret it more than anything else in life.

So don’t be blind. Don’t mistake group dynamics for salvation. Or charisma for integrity. Or obscurity for wisdom. If you do, the latter day gurus, masters of manipulation that they are, will take you for a ride. A long and unpleasant one.


A New Kind of Binomial Tree

Let us now consider a binomial pricing model, illustrating that the ease and elegance with which Haskell handles factorial do indeed extend to real-life quantitative finance problems as well.

The binomial tree pricing model works by assuming that the price of an underlying asset can only move up or down by constant factors u and d during a small time interval \delta t. Stringing together many such time intervals, we build up the expiration time of the derivative instrument we are trying to price. The derivative is defined as a function of the price of the underlying at any point in time.

Figure 1. Binomial tree pricing model. On the X axis, labeled i, we have the time steps. The Y axis represents the price of the underlying, labeled j. The only difference from the standard binomial tree is that we have let j be both positive and negative, which is mathematically natural, and hence simplifies the notation in a functional language.

We can visualize the binomial tree as shown in Fig. 1. At time t = 0, we have the asset price S(0) = S_0. At t = \delta t (with the maturity T = N\delta t), we have two possible asset values S_0 u and S_0 d = S_0 / u, where we have chosen d = 1/u. In general, at time i\delta t, at the asset price node level j, we have

S_{ij} = S_0 u^j

By choosing the up and down price movements to be reciprocal (d = 1/u), we have created a recombinant binomial tree, which is why we have only i + 1 distinct price nodes at time step i\delta t (the node index j shares the parity of i). In order to price the derivative, we have to assign risk-neutral probabilities to the up and down price movements. The risk-neutral probability for an upward movement of u is denoted by p. With these notations, we can write down the fair value of an American call option (of expiry T, underlying asset price S_0, strike price K, risk free interest rate r, asset price volatility \sigma and number of time steps in the binomial tree N) using the binomial tree pricing model as follows:

\textrm{OptionPrice}(T, S_0, K, r, \sigma, N) = f_{00}

where f_{ij} denotes the fair value of the option at the node at time step i and price level j (referring to Fig. 1).

f_{ij} = \left\{\begin{array}{ll}\textrm{Max}(S_{ij} - K,\, 0) & \textrm{if } i = N \\ \textrm{Max}\left(S_{ij} - K,\; e^{-r\delta t}\left(p f_{i+1, j+1} + (1-p) f_{i+1, j-1}\right)\right) & \textrm{otherwise}\end{array}\right.

At maturity, i = N and i\delta t = T, and we exercise the option if it is in the money, which is what the first Max function denotes. The last term in the expression above represents the risk neutral backward propagation of the option price from the time layer at (i+1)\delta t to i\delta t. At each node, if the continuation value is less than the intrinsic value, we exercise the option early, which is what the second Max function captures.

The common choice for the upward price movement depends on the volatility of the underlying asset: u = e^{\sigma\sqrt{\delta t}}, and the downward movement is chosen to be d = 1/u to ensure that we have a recombinant tree. For risk neutrality, we have the probability defined as:

p = \frac{ e^{r\delta t} - d}{u - d}
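As a quick sanity check (my own snippet, with names of my choosing, not from the article), the defining property of this p is that the risk-neutral expectation of the one-step price ratio grows at the risk-free rate, i.e. p u + (1-p) d = e^{r\delta t}:

```haskell
-- The defining property of the risk-neutral probability p:
-- p * u + (1 - p) * d should equal exp (r * dt), so this
-- residual should be zero up to floating point rounding.
riskNeutralResidual :: Double -> Double -> Double -> Double
riskNeutralResidual r sigma dt = p * u + (1 - p) * d - exp (r * dt)
  where
    u = exp (sigma * sqrt dt)
    d = 1 / u
    p = (exp (r * dt) - d) / (u - d)
```

Algebraically, p u + (1-p) d = d + p(u - d) = e^{r\delta t}, so the residual vanishes identically for any choice of r, \sigma and \delta t.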

For the purpose of illustrating how it translates to the functional programming language of Haskell, let us put all these equations together once more.

\textrm{OptionPrice}(T, S_0, K, r, \sigma, N) = f_{00}
where
f_{ij} = \left\{\begin{array}{ll}\textrm{Max}(S_{ij} - K,\, 0) & \textrm{if } i = N \\ \textrm{Max}\left(S_{ij} - K,\; e^{-r\delta t}\left(p f_{i+1, j+1} + (1-p) f_{i+1, j-1}\right)\right) & \textrm{otherwise}\end{array}\right.
S_{ij} = S_0 u^j
u = e^{\sigma\sqrt{\delta t}}
d = 1/u
\delta t = T/N
p = \frac{e^{r\delta t} - d}{u - d}

Now, let us look at the code in Haskell.

optionPrice t s0 k r sigma n = f 0 0
    where
      -- fair value of the option at time node i, price node j
      f i j =
          if i == n
          then max (s i j - k) 0     -- payoff at maturity
          else max (s i j - k)       -- early exercise vs. the discounted,
                   (exp (-r * dt) *  -- risk-neutral continuation value
                     (p * f (i+1) (j+1) + (1-p) * f (i+1) (j-1)))
      s i j = s0 * u ** j            -- S_ij = S_0 u^j
      u = exp (sigma * sqrt dt)      -- up-move factor
      d = 1 / u                      -- down-move factor
      dt = t / n                     -- time step
      p = (exp (r * dt) - d) / (u - d)  -- risk-neutral probability

As we can see, it is a near-verbatim rendition of the mathematical statements, nothing more. This code snippet actually runs as it is, and produces the following result.

*Main> optionPrice 1 100 110 0.05 0.3 20
10.10369526959085

Looking at the remarkable similarity between the mathematical equations and the code in Haskell, we can understand why mathematicians love the idea of functional programming. This particular implementation of the binomial pricing model may not be the most computationally efficient, but it certainly is one of great elegance and brevity.
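Indeed, the naive recursion revisits the same (i, j) nodes an exponential number of times as N grows. As a sketch of one remedy (my own variant, not from the article; the name optionPrice' is mine), we can build the payoff layer at maturity and fold backward one time step at a time, so each node is computed exactly once:

```haskell
-- Layer-by-layer binomial pricing: start from the payoff at
-- maturity and propagate backward, computing each node once.
optionPrice' :: Double -> Double -> Double -> Double -> Double -> Int -> Double
optionPrice' t s0 k r sigma n = head (foldr step payoff [0 .. n - 1])
  where
    -- option values at maturity, for j = -n, -n+2, ..., n
    payoff = [intrinsic j | j <- [-n, -n + 2 .. n]]
    -- one backward step: layer i from layer i+1; the list element
    -- at j-1 is the "down" child, the one at j+1 the "up" child
    step i next = zipWith3 node [-i, -i + 2 .. i] next (tail next)
      where
        node j fDown fUp =
          max (intrinsic j)
              (exp (-r * dt) * (p * fUp + (1 - p) * fDown))
    intrinsic j = max (s0 * u ** fromIntegral j - k) 0
    u = exp (sigma * sqrt dt)
    d = 1 / u
    dt = t / fromIntegral n
    p = (exp (r * dt) - d) / (u - d)
```

Since it implements the same recursion, it should agree with the naive version (optionPrice' 1 100 110 0.05 0.3 20 reproduces the value above), while its cost grows only quadratically in N.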

While a functional programming language may not be appropriate for a full-fledged implementation of a trading platform, many of its underlying principles, such as type abstractions and strict purity, may prove invaluable in quantitative finance programs where heavy mathematics and number crunching are involved. The mathematical rigor enables us to employ complex functional manipulations at the program level. The religious adherence to the notion of statelessness in functional programming has another great benefit: it helps in parallelizing and grid-enabling the computations with almost no extra work.


Functional Programming

Functional programming is the programming methodology that puts great emphasis on statelessness and religiously avoids side effects of one function in the evaluation of any other function. Functions in this methodology are like mathematical functions. The conventional programming style, on the other hand, is considered “imperative” and uses states and their changes for accomplishing computing tasks.

Adopting this notion of functional programming may sound like regressing to the pre-object-oriented age, and sacrificing all the advantages thereof. But there are practitioners, both in academia and in the industry, who strongly believe that functional languages are the only approach that ensures stability and robustness in financial and number crunching applications.

Functional languages, by definition, are stateless. They do everything through functions, which return results that are, well, functions of their arguments. This statelessness immediately makes the functions behave like their mathematical counterparts. Similarly, in a functional language, variables behave like mathematical variables rather than labels for memory locations. And a statement like x = x + 1 would make no sense. After all, it makes no sense in real life either.
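To make the contrast concrete, here is a small sketch of my own (the name sumOfSquares is mine, not from the text): instead of updating a counter in place with something like x = x + 1, a functional program produces new values, for instance by folding over a list.

```haskell
-- No counter is ever updated in place; foldr threads each
-- intermediate value into the next step as a fresh argument.
sumOfSquares :: [Double] -> Double
sumOfSquares = foldr (\x acc -> x * x + acc) 0
```

Calling sumOfSquares [1, 2, 3] yields 14, with no mutable accumulator anywhere in sight.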

This strong mathematical underpinning makes functional programming the darling of mathematicians. A piece of code written in a functional programming language is a set of declarations quite unlike a standard computer language such as C or C++, where the code represents a series of instructions for the computer. In other words, a functional language is declarative — its statements are mathematical declarations of facts and relationships, which is another reason why a statement like x = x + 1 would be illegal.

The declarative nature of the language makes it “lazy,” meaning that it computes a result only when we ask for it. (At least, that is the principle. In real life, full computational laziness may be difficult to achieve.) Computational laziness makes a functional programming language capable of handling many situations that would be impossible or exceedingly difficult for procedural languages. Users of Mathematica, which is a functional language for symbolic manipulation of mathematical equations, would immediately appreciate the advantages of computational laziness and other functional features such as its declarative nature. In Mathematica, we can carry out an operation like solving an equation for instance. Once that is done, we can add a few more constraints at the bottom of our notebook, scroll up to the command to solve the original equation and re-execute it, fully expecting the later constraints to be respected. They will be, because a statement appearing at a later part in the program listing is not some instruction to be carried out at a later point in a sequence. It is merely a mathematical declaration of truism, no matter where it appears.
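A minimal Haskell illustration of this laziness (my example, with names of my choosing) is a declaration that would be nonsense in an eager language: an infinite list, of which we only ever force a finite prefix.

```haskell
-- An infinite list of squares: legal only because Haskell is lazy.
squares :: [Integer]
squares = [n * n | n <- [1 ..]]

-- Demanding five elements forces only five computations.
firstFive :: [Integer]
firstFive = take 5 squares
```

Evaluating firstFive gives [1, 4, 9, 16, 25]; the rest of squares remains an unevaluated promise.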

This affinity of functional languages toward mathematics may appeal to quants as well, who are, after all, mathematicians of the applied kind. To see where the appeal stems from, let us consider a simple example of computing the factorial of an integer. In C or C++, we can write a factorial function either using a loop or making use of recursion. In a functional language, on the other hand, we merely restate the mathematical definition, using the syntax of the language we are working with. In mathematics, we define factorial as:

n! = \left\{\begin{array}{ll}1 & n = 1 \\ n \times (n-1)! & \textrm{otherwise}\end{array}\right.

And in Haskell (a well known functional programming language), we can write:

bang 1 = 1
bang n = n * bang (n-1)

We can then call bang 12 to get the factorial of 12.
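One caveat worth noting (my addition, not in the original): as defined above, bang never terminates for arguments below 1, because the recursion steps right past the base case. A slightly more defensive sketch uses a guard instead of a bare pattern:

```haskell
-- Factorial with a guard: non-positive arguments fall through to
-- the base case instead of recursing past it forever.
bang :: Integer -> Integer
bang n
  | n <= 1    = 1
  | otherwise = n * bang (n - 1)
```

This still reads as a direct transcription of the mathematical definition, and bang 12 gives 479001600 as before.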

This example may look artificially simple. But we can port even more complicated problems from mathematics directly to a functional language. For an example closer to home, let us consider a binomial pricing model, illustrating that the ease and elegance with which Haskell handles factorial do indeed extend to real-life quantitative finance problems as well.


Magic of Object Oriented Languages

Nowhere is the dominance of paradigms more obvious than in object oriented languages. Just take a look at the words that we use to describe some of their features: polymorphism, inheritance, virtual, abstract, overloading — all of them normal (or near-normal) everyday words, but signifying notions and concepts quite far from their literal meaning. Yet, and here is the rub, their meaning in the computing context seems exquisitely appropriate. Is it a sign that we have taken these paradigms too far? Perhaps. After all, the “object” in object oriented programming is already an abstract paradigm, having nothing to do with “That Obscure Object of Desire,” for instance.

We do see the abstraction process running a bit wild in design patterns. When a pattern calls itself a visitor or a factory, it takes a geekily forgiving heart to grant the poetic license silently usurped. Design patterns, despite the liberties they take with our sensitivities, add enormous power to object oriented programming, which is already very powerful, with all the built in features like polymorphism, inheritance, overloading etc.

To someone with an exclusive background in sequential programming, all these features of object oriented languages may seem like pure magic. But most of the features are really extensions or variations on their sequential programming equivalents. A class is merely a structure, and can even be declared as such in C++. When you add a method in a class, you can imagine that the compiler is secretly adding a global function with an extra argument (the reference to the object) and a unique identifier (say, a hash value of the class name). Polymorphic functions also can be implemented by adding a hash value of the function signature to the function names, and putting them in the global scope.

The real value of the object oriented methodology is that it encourages good design. But good programming discipline goes beyond mere adaptation of an object oriented language, which is why my first C++ teacher said, “You can write bad Fortran in C++ if you really want. Just that you have to work a little harder to do it.”

For all their magical powers, the object oriented programming languages all suffer from some common weaknesses. One of their major disadvantages is, in fact, one of the basic design features of object oriented programming. Objects are memory locations containing data as laid down by the programmer (and the computer). Memory locations remember the state of the object — by design. What state an object is in determines what it does when a method is invoked. So the object oriented approach is inherently stateful, if we can agree on what “state” means in the object oriented context.

But in a user interface, where we do not have much control over the sequence in which various steps are executed, we might get erroneous results in stateful programming depending on what step gets executed at what point in time. Such considerations are especially important when we work with parallel computers in complex situations. One desirable property in such cases is that the functions return a result based solely on their arguments. This property, termed “purity,” is the basic design goal of most functional languages, although their architects will concede that most of them are not strictly “pure.”


Paradigms All the Way

Paradigms permeate almost all aspects of computing. Some of these paradigms are natural. For instance, it is natural to talk about an image or a song when we actually mean a JPEG or an MP3 file. A file is already an abstraction evolved in the file-folder paradigm popularized in Windows systems. The underlying objects or streams are again abstractions for patterns of ones and zeros, which represent voltage levels in transistors, or spin states on a magnetic disk. There is an endless hierarchy of paradigms. Like the proverbial turtles that confounded Bertrand Russell (or was it Samuel Johnson?), it is paradigms all the way down.

Some paradigms have faded into the background although the terminology evolved from them lingers. The original paradigm for computer networks (and of the Internet) was a mesh of interconnections residing in the sky above. This view is more or less replaced by the World Wide Web residing on the ground at our level. But we still use the original paradigm whenever we say “download” or “upload.” The World Wide Web, by the way, is represented by the acronym WWW that figures in the name of all web sites. It is an acronym with the dubious distinction of being about the only one that takes us longer to say than what it stands for. But, getting back to our topic, paradigms are powerful and useful means to guide our interactions with unfamiliar systems and environments, especially in computers, which are strange and complicated beasts to begin with.

A basic computer processor is deceptively simple. It is a string of gates. A gate is a switch (more or less) made up of a small group of transistors. A 32 bit processor has 32 switches in an array. Each switch can be either off (representing a zero) or on (one). And a processor can perform only one operation — add the contents of another array of gates (called a register) to itself. In other words, it can only “accumulate.”

In writing this last sentence, I have already started a process of abstraction. I wrote “contents,” thinking of the register as a container holding numbers. It is the power of multiple levels of abstraction, each of which is simple and obvious, but building on whatever comes before it, that makes a computer enormously powerful.

We can see abstractions, followed by the modularization of the abstracted concept, in every aspect of computing, both hardware and software. Groups of transistors become arrays of gates, and then processors, registers, cache or memory. Accumulations (additions) become all arithmetic operations, string manipulations, user interfaces, image and video editing and so on.

Another feature of computing that aids in the seemingly endless march of Moore’s Law (which states that computers will double in their power every 18 months) is that each advance seems to fuel further advances, generating an explosive growth. The first compiler, for instance, was written in primitive assembly language. The second one was written using the first one, and so on. Even in hardware development, one generation of computers becomes the tool for designing the next generation, stoking a seemingly inexorable cycle of development.

While this positive feedback in hardware and software is a good thing, the explosive nature of growth may take us in wrong directions, much like the strong growth in the credit market led to the banking collapses of 2008. Many computing experts now wonder whether the object oriented technology has been overplayed.


Zeros and Ones

Computers are notorious for their infuriatingly literal obedience. I am sure anyone who has ever worked with a computer has come across the lack of empathy on its part — it follows our instructions to the letter, yet ends up accomplishing something altogether different from what we intend. We have all been bitten in the rear end by this literal adherence to logic at the expense of commonsense. We can attribute at least some of the blame to our lack of understanding (yes, literal and complete understanding) of the paradigms used in computing.

Rich in paradigms, the field of computing has a strong influence in the way we think and view the world. If you don’t believe me, just look at the way we learn things these days. Do we learn anything now, or do we merely learn how to access information through browsing and searching? Even our arithmetic abilities have eroded along with the advent of calculators and spreadsheets. I remember the legends of great minds like Enrico Fermi, who estimated the power output of the first nuclear blast by floating a few pieces of scrap paper, and like Richard Feynman, who beat an abacus expert by doing binomial expansion. I wonder if the Fermis and Feynmans of our age would be able to pull those stunts without pulling out their pocket calculators.

Procedural programming, through its unwarranted reuse of mathematical symbols and patterns, has shaped the way we interact with our computers. The paradigm that has evolved is distinctly unmathematical. Functional programming represents a counter attack, a campaign to win our minds back from the damaging influences of the mathematical monstrosities of procedural languages. The success of this battle may depend more on might and momentum rather than truth and beauty. In our neck of the woods, this statement translates to a simple question: Can we find enough developers who can do functional programming? Or is it cheaper and more efficient to stick to procedural and object oriented methodologies?


Change the Facts

There is beauty in truth, and truth in beauty. Where does this link between truth and beauty come from? Of course, beauty is subjective, and truth is objective — or so we are told. It may be that we have evolved in accordance with the beautiful Darwinian principles to see perfection in absolute truth.

The beauty and perfection I’m thinking about are of a different kind — those of ideas and concepts. At times, you may get an idea so perfect and beautiful that you know it has to be true. This conviction of truth arising from beauty may be what made Einstein declare:

But this conviction about the veracity of a theory based on its perfection is hardly enough. Einstein’s genius really is in his philosophical tenacity, his willingness to push the idea beyond what is considered logical.

Let’s take an example. Let’s say you are in a cruising airplane. If you close the windows and somehow block out the engine noise, it will be impossible for you to tell whether you are moving or not. This inability, when translated to physics jargon, becomes a principle stating, “Physical laws are independent of the state of motion of the experimental system.”

The physical laws Einstein chose to look at were Maxwell’s equations of electromagnetism, which had the speed of light appearing in them. For them to be independent of (or covariant with, to be more precise) motion, Einstein postulated that the speed of light had to be a constant regardless of whether you were going toward it or away from it.

Now, I don’t know if you find that postulate particularly beautiful. But Einstein did, and decided to push it through all its illogical consequences. For it to be true, space had to contract and time had to dilate, and nothing could go faster than light. Einstein said, well, so be it. That is the philosophical conviction and tenacity that I wanted to talk about — the kind that gave us Special Relativity about a hundred years ago.

Want to get to General Relativity from here? Simple, just find another beautiful truth. Here is one… If you have gone to Magic Mountain, you would know that you are weightless during a free fall (best tried on an empty stomach). Free fall is acceleration at 9.8 m/s/s (or 32 ft/s/s), and it nullifies gravity. So gravity is the same as acceleration — voila, another beautiful principle.

In order to make use of this principle, Einstein perhaps thought of it in pictures. What does acceleration mean? It is how fast the speed of something is changing. And what is speed? Think of something moving in a straight line — our cruising airplane, for instance, and call the line of flight the X-axis. We can visualize its speed by thinking of a time T-axis at right angles with the X-axis so that at time t = 0, the airplane is at x = 0. At time t, it is at a point x = v.t, if it is moving with a speed v. So a line in the X-T plane (called the world line) represents the motion of the airplane. A faster airplane would have a shallower world line. An accelerating airplane, therefore, will have a curved world line, running from the slow world line to the fast one.

So acceleration is curvature in space-time. And so is gravity, being nothing but acceleration. (I can see my physicist friends cringe a bit, but it is essentially true — just that you straighten the world-line calling it a geodesic and attribute the curvature to space-time instead.)

The exact nature of the curvature and how to compute it, though beautiful in their own right, are mere details, as Einstein himself would have put it. After all, he wanted to know God’s thoughts, not the details.

Of Dreams and Memories

I recently watched The Diving Bell and the Butterfly (Le scaphandre et le papillon), which describes the tragic plight of the French journalist Jean-Dominique Bauby, who suffered a severe stroke and became “locked-in.” During my research days, I had worked a bit on rehabilitation systems for such locked-in patients, who have normal or near-normal cognitive activities but no motor control. In other words, their fully functional minds are locked in a useless body that affords them no means of communication with the external world. It is the solitary confinement of the highest order.

Locked-in condition is one of my secret fears; not so much for myself, but that someone close to me might have to go through it. My father suffered a stroke and was comatose for a month before he passed away, and I will always wonder whether he was locked-in. Did he feel pain and fear? So I Googled a bit to find out if stroke patients were conscious inside. I couldn’t find anything definitive. Then it occurred to me that perhaps these stroke patients were conscious, but didn’t remember it later on.

That thought brought me to one of my philosophical musings. What does it mean to say that something happened if you cannot remember it? Let’s say you had to go through a lot of pain for whatever reason. But you don’t remember it later. Did you really suffer? It is like a dream that you cannot remember. Did you really dream it?

Memory is an essential ingredient of reality, and of existence — which is probably why they can sell so many digital cameras and camcorders. When memories of good times fade in our busy minds, perhaps we feel bits of our existence melting away. So we take thousands of pictures and videos that we are too busy to look at later on.

But I wonder. When I die, my memories will die with me. Sure, those who are close to me will remember me for a while, but the memories that I hold on to right now, the things I have seen and experienced, will all disappear — like an uncertain dream that someone (perhaps a butterfly) dreamt and forgot. So what does it mean to say that I exist? Isn’t it all a dream?

Stinker Emails — A Primer

Email has revolutionized corporate communication in the last decade. Most of its impact has been positive. An email from the big boss to all@yourcompany, for instance, is a fair substitute for a general communication meeting. In smaller teams, email often saves meetings and increases productivity.

When compared to other modes of communication (telephone, voice mail etc.), email has a number of characteristics that make it particularly suited for corporate communication. It gives the sender the right amount of distance from the recipient to feel safe behind the keyboard. The sender gets enough time to polish the language and presentation. He has the option of sending the email to multiple recipients at once. The net effect of these characteristics is that a normally timid soul may become a formidable email persona.

A normally aggressive soul, on the other hand, may become an obnoxious sender of what are known as stinkers. Stinkers are emails that are meant to inflict humiliation.

Given the importance of email communication these days, you may find yourself seduced by the dark allure of stinkers. If you do, here are the first steps in mastering the art of crafting a stinker. The trick is to develop a holier-than-thou attitude and assume a moral high ground. For instance, suppose you are upset with a team for their shoddy work, and want to highlight the fact to them (and to a few key persons in the organization, of course). A novice may be tempted to write something like, “You and your team don’t know squat.” Resist that temptation, and hold that rookie email. Far more satisfying is to compose it as, “I will be happy to sit down with you and your team and share our expertise.” This craftier composition also subtly shows off your superior knowledge.

Emails can be even more subtle. For instance, you can sweetly counsel your boss regarding some issue as, “No point in rushing in where angels fear to tread,” and have the secret pleasure that you managed to call him a fool to his face!

Counter stinkers are doubly sweet. While engaging in an email duel, your best hope is to discover a factual inaccuracy in the stinker. Although you are honor-bound to respond to a stinker, silence also can be an effective response. It sends a signal that you either found the stinker too unimportant to respond to, or, worse, you accidentally deleted it without reading it.

Beware of stinker traps. You may get an email inviting you to work on a problem with a generous offer to help. Say you take the bait and request help. The next email (copied to practically everybody on earth) may read something like, “If you bothered to read the previous message,” (referring to an email sent ten days ago to 17 others and two email groups) “you would know that…” Note how easy it is to imply that you don’t know what you are supposed to, and that you are in the habit of ignoring important messages.

We have no sure defense against stinker traps other than knowing the sender. If a sender is known for his stinker-happy disposition, treat all his sweet overtures with suspicion. It is unlikely that he has had a change of heart and decided to treat you civilly. Much more likely is that he is setting you up for something that he will enjoy rather more than you!

At the end of the day, don’t worry too much about stinkers if you do find yourself at the receiving end. Keep a smile on your face and recognize the stinkers for what they are — ego trips.
