Functional programming is a programming methodology that places great emphasis on statelessness and religiously avoids letting the side effects of one function creep into the evaluation of any other. Functions in this methodology behave like mathematical functions. The conventional programming style, on the other hand, is considered “imperative”: it accomplishes its computing tasks by manipulating states and their changes.
Adopting this notion of functional programming may sound like regressing to the pre-object-oriented age and sacrificing all the advantages thereof. But there are practitioners, both in academia and in the industry, who strongly believe that functional languages are the only approach that ensures stability and robustness in financial and number-crunching applications.
Functional languages, by definition, are stateless. They do everything through functions, which return results that are, well, functions of their arguments. This statelessness immediately makes the functions behave like their mathematical counterparts. Similarly, in a functional language, variables behave like mathematical variables rather than labels for memory locations, and a statement like x = x + 1 would make no sense. After all, it makes no sense in real life either.
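As a quick illustration of that immutability, here is a small Haskell sketch of my own (not drawn from any particular source): a binding is made exactly once, and “modifying” a value simply produces a new one.

import Data.Char (toUpper)

greeting :: String
greeting = "hello"               -- bound once; there is no way to re-assign it

shouted :: String
shouted = map toUpper greeting   -- "HELLO", a brand-new string; greeting itself is untouched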
This strong mathematical underpinning makes functional programming the darling of mathematicians. A piece of code written in a functional programming language is a set of declarations, quite unlike a program in a standard computer language such as C or C++, where the code represents a series of instructions for the computer. In other words, a functional language is declarative: its statements are mathematical declarations of facts and relationships, which is another reason why a statement like x = x + 1 would be illegal.
The declarative nature of the language makes it “lazy,” meaning that it computes a result only when we ask for it. (At least, that is the principle; in real life, full computational laziness may be difficult to achieve.) Computational laziness makes a functional programming language capable of handling many situations that would be impossible or exceedingly difficult for procedural languages. Users of Mathematica, a functional language for symbolic manipulation of mathematical equations, will immediately appreciate the advantages of computational laziness and of other functional features such as the declarative style. In Mathematica, we can carry out an operation such as solving an equation. Once that is done, we can add a few more constraints at the bottom of our notebook, scroll up to the command that solved the original equation and re-execute it, fully expecting the later constraints to be respected. They will be, because a statement appearing later in the program listing is not an instruction to be carried out at a later point in a sequence; it is a declaration of a mathematical truth, no matter where it appears.
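To see laziness in action in Haskell (a small sketch of my own, not a Mathematica example), we can define a value over an infinite list; evaluating it, say in GHCi, computes only the five elements actually demanded:

firstFiveSquares :: [Integer]
firstFiveSquares = take 5 (map (^2) [1..])   -- [1,4,9,16,25]; the infinite list [1..] is never built in full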
This affinity of functional languages toward mathematics may appeal to quants as well, who are, after all, mathematicians of the applied kind. To see where the appeal stems from, let us consider the simple example of computing the factorial of an integer. In C or C++, we would write a factorial function either with a loop or with recursion. In a functional language, on the other hand, we merely restate the mathematical definition, using the syntax of the language we are working with. In mathematics, we define the factorial recursively as:

1! = 1
n! = n × (n − 1)!
And in Haskell (a well-known functional programming language), we can write:
bang 1 = 1
bang n = n * bang (n-1)
We can then call bang 12 to get the factorial of 12.
This example may look artificially simple, but we can port even more complicated problems from mathematics directly to a functional language. For an example closer to home, let us consider a binomial pricing model, which illustrates that the ease and elegance with which Haskell handles the factorial do indeed extend to real-life problems in quantitative finance.
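To give a flavour of what that looks like, here is a minimal sketch of my own rather than the model developed here: a Cox-Ross-Rubinstein backward-induction pricer for a European call, in which the name binomPrice, the parameterisation and the payoff are all assumptions made purely for illustration.

-- Sketch of a Cox-Ross-Rubinstein binomial pricer for a European call.
-- The name, parameterisation and payoff are illustrative assumptions.
binomPrice :: Double   -- spot
           -> Double   -- strike
           -> Double   -- risk-free rate
           -> Double   -- volatility
           -> Double   -- time to expiry (years)
           -> Int      -- number of steps
           -> Double
binomPrice s0 k r sigma t n = foldBack terminal
  where
    dt   = t / fromIntegral n
    u    = exp (sigma * sqrt dt)
    d    = 1 / u
    p    = (exp (r * dt) - d) / (u - d)        -- risk-neutral up probability
    disc = exp (-r * dt)
    -- call payoffs at the terminal nodes, indexed by the number of up moves
    terminal = [ max (s0 * u ^^ j * d ^^ (n - j) - k) 0 | j <- [0 .. n] ]
    -- discount expected values one level back in the tree
    stepBack vs = [ disc * (p * vu + (1 - p) * vd) | (vd, vu) <- zip vs (tail vs) ]
    -- keep stepping back until only the root value is left
    foldBack [v] = v
    foldBack vs  = foldBack (stepBack vs)

Calling, say, binomPrice 100 100 0.05 0.2 1 500 prices an at-the-money one-year call; each line of the where clause is simply the corresponding line of the textbook recursion, written down once and left to the language to evaluate.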