Functional Programming vs Object-Oriented Programming Chapter 1 — An Historical Intro and “What is Functional Programming anyhow?”

Matthew Hayden
7 min read · Oct 17, 2021


This is a practical guide to Functional Programming compared to Object-Oriented and Imperative Programming. It’s based on my 13 years of experience as a working programmer where I have used both Object-Oriented and Functional Programming languages.

This is about a 6–7 minute read if you are already familiar with the concepts of Functional Programming. For the uninitiated, I recommend you take a little more time.

First, we’ll delve into the history of Functional Programming and Object-Oriented Programming and then we’ll try briefly to answer the question “What is Functional Programming?”.

The very different histories of Functional Programming and OOP

Turing’s Machine

It’s useful to know a little history to understand where Functional Programming and Object-Oriented Programming came from.

The idea of an automatic computer was first set down in England by a very talented mathematician, Alan Turing, in 1936, a few years before the Second World War.

During the war, with the menace of invasion looming, Turing investigated machines that could be used to break Enigma, the secret messaging system of the German Army, Air Force, and Navy, one of the most sophisticated ciphers of its time and one that proved almost unbreakable by hand.

The computers of the day were just people, and people have needs: they get tired, they need breaks, and, importantly, they make mistakes. Turing set out to describe an automatic computer, a machine that could follow instructions perfectly without ever tiring, all in a fraction of the time a person would take. This idealised automatic computer is now known as a Turing Machine.

Turing’s Machines are a model of computation with several working parts. The first is a tape that is infinitely long, divided into cells. There is a head that reads and writes symbols to these cells, moving freely back and forth one cell at a time. The final part is a finite state machine, which requires a little more description.

In short, the state machine is a list of rules, unique to each Turing Machine, that describes how to move the head back and forth along the tape, reading and writing symbols along the way. These machines, each with their own list of rules, capture the essence of today's computers and of what a program is: a series of instructions, running on an electrical machine.
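The moving parts described above can be sketched as a tiny simulator. This is a minimal illustration in Python, not any standardised formulation; the names and the example machine (which increments a number written in unary) are my own.

```python
# A minimal Turing machine simulator: a tape, a head, and a table of rules.
# Each rule maps (state, symbol read) to (symbol to write, head move, next state).

def run_turing_machine(rules, tape, state="start"):
    tape = dict(enumerate(tape))  # sparse tape: cell index -> symbol
    head = 0
    while state != "halt":
        symbol = tape.get(head, "_")             # "_" stands for a blank cell
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example machine: move right over a run of 1s, then append one more 1,
# incrementing a number written in unary.
rules = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_turing_machine(rules, "111"))  # -> 1111
```

Notice that the whole character of the machine lives in its rules table, exactly as described above: swap in a different table and you have a different machine.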

Enter Von Neumann

Computers that we are more familiar with today have their history in the United States, developed most notably by the Hungarian-American mathematician John Von Neumann. His design, the Von Neumann Architecture, is the foundation of virtually all modern computers.

The Von Neumann Architecture describes a central processing unit or CPU (sometimes divided into an arithmetic-and-logic unit and a control unit) that receives both instructions and data from storage, often named memory. The instructions tell the CPU how to manipulate data, and read and write it to this memory.

From this simple scheme, a family of computer programming languages has evolved: the Imperative Programming Languages, from which Object-Oriented Programming was born. CPU instructions became expressions and statements, memory addresses became variables and pointers, and loops and subroutines evolved from simpler jump instructions that moved the CPU's attention around its memory.

The Lambda Calculus

At the same time as Turing was developing his Turing Machines, an American mathematician named Alonzo Church was developing his own model of computation. As it turned out, Church's Lambda Calculus was exactly equivalent in power to Turing's machines.

What is the Lambda Calculus though? It’s a very simple set of rules that is concerned only with applying mathematical functions to their arguments, building up more complex functions by combination.
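To give a taste of building complexity purely by combining functions, here is a sketch using Python's lambdas. The Church-numeral encoding shown is a classic device from the Lambda Calculus; the helper `to_int` is my own addition for readability.

```python
# In the Lambda Calculus, everything is a function. Even numbers can be
# encoded as functions: the Church numeral n applies a function f to x, n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# More complex functions arise by combination: addition applies one
# numeral's worth of f, then the other's.
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

# Convert a Church numeral back to a Python int by counting applications.
to_int = lambda n: n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # -> 5
```

Nothing here but function definition and application, yet we have arithmetic: a hint of why Church's system turned out to be as powerful as Turing's machines.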

What is Functional Programming?

Functions are the focus of Functional Programming

A function is a factory. Raw materials go in, are transformed through some process, and exit as something new. This is a slightly misleading metaphor, as we'll see later, but it does the job for now. If you're familiar with set theory, a function maps each member of one set to a member of another (or the same) set.

Numbers are transformed into other numbers. Functions are transformed into other functions. Numbers can become functions or vice versa. We’re not restricted by the type of values we’re transforming or the types of values we can create, nor are we restricted by the number of values we can transform or create.

Here’s the definition of a simple function that always produces the same value regardless of what you put into it.

alwaysFive y = 5

This is also the notation that we are going to use for our function definitions.

First, on the left of the definition, is the name of our function. It's useful to give our functions names so we can refer to them later. Anything after the name and before the equals sign is what we will call an argument. These are really argument names: they stand in for the values we will substitute for them when we apply our function. Functions can take one argument, or two, or as many arguments as needed. Our argument and function names are always words starting with lowercase letters; they can even be single letters. They can be English words, made-up words like quuuuux, or complete nonsense like f4nn13.

Everything on the right of the equals is the function body. This is the process that we should follow to obtain the result of our function for the arguments we have applied it to. Here the process simply always returns the number five.

Let’s try our simple function out by applying it to a number. In other words, let’s evaluate our function when it’s applied to a particular value.

alwaysFive 2 becomes 5.

Our alwaysFive function will always evaluate to 5 for any given argument. This is easy to see, as its body never refers to its only argument, y.
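Since the notation above is language-neutral, it may help to see the same idea in a familiar language. Here is a Python rendering; the snake_case name is just Python convention, not part of the original notation.

```python
# The article's `alwaysFive y = 5`, written as a Python function.
def always_five(y):
    return 5

# The argument y is never used, so the result is the same for any input.
print(always_five(2))       # -> 5
print(always_five("cats"))  # -> 5

# Functions are values too, and can be transformed into other functions:
def twice(f):
    return lambda x: f(f(x))

print(twice(always_five)(2))  # -> 5, applying always_five two times
```

The `twice` helper shows the earlier point that functions can be transformed into other functions, just as numbers are transformed into other numbers.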

Purity and Mutation

Functions are pure. Purity, in this case, means that a function is defined entirely in terms of other pure functions, values, and its own arguments. Nothing outside of the function's definition can influence its return value. Equally, there is nothing outside of the function that it can influence: it can't set any variables, change a memory address, fire missiles, or print characters to the screen. This is what makes it pure.

Why is it useful to be pure? Purity isn't a means to any ultimate end, but it makes working with functions easy: you have only the function body to read to understand how a function works. It also makes functions easy to test, as they require no surrounding environment to exist.

Immutability is also a property of values in functional programming. This means that a value cannot be changed, or mutated, only copied and transformed into a new value. This is why our factory analogy was in fact a bad one: nothing is being consumed or lost.

You will have used mutation before, via assignment statements and other operators in Object-Oriented or Imperative languages. There are even immutable values in Object-Oriented languages, such as Java's Strings. However, Java variables are reassignable by default, and most objects you create will likely be mutable, that is, changeable somehow.

Immutability adds another layer of predictability to a function's definition. If a function argument cannot be changed, then we can always rely on it being the same value, both within the function and after the call returns. This combination of purity and immutability is sometimes referred to as safety.
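Python conveniently has both kinds of value, so it can illustrate the difference. Tuples are immutable and can only be copied and transformed; lists are mutable, and a function can change one behind its caller's back.

```python
# Immutable: "changing" a tuple really produces a new value,
# leaving the original untouched. Nothing is lost.
original = (1, 2, 3)
transformed = original + (4,)   # a new tuple; `original` is not modified

print(original)      # -> (1, 2, 3)
print(transformed)   # -> (1, 2, 3, 4)

# Mutable: a list passed as an argument can be changed in place,
# and the caller observes the change.
def sneaky(xs):
    xs.append(99)

nums = [1, 2, 3]
sneaky(nums)
print(nums)  # -> [1, 2, 3, 99]: we can no longer rely on nums being the same
```

With the tuple, we can rely on `original` being the same value after any call; with the list, we cannot. That reliance is exactly the predictability immutability buys.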

Getting things done

Doing something useful on an actual computer requires us to be able to change something, or mutate it. It could mean writing a file to a file system, changing some pixels on a screen or sending packets to a network interface and so on. Functions on their own cannot do this and functional programs require something else to achieve this which we’ll get into in later chapters of this guide.

Imperative, and therefore OOP, languages do not guarantee purity for their procedures and object methods. If you want purity in your object methods, you have to carefully ensure it yourself. On the other side of the coin, if you wish to change something in a functional language, you have to put in more effort. You cannot simply call print.

Functional programming languages often aim to make a clear distinction between pure functions and imperative code that does things. Sometimes even Object Oriented code does this.
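One common way to draw that distinction, in any language, is to keep the calculation pure and push the imperative "doing" into a thin outer layer. The function names below are illustrative, not from any particular library.

```python
# Pure core: same input, same output, nothing else touched.
# Easy to test with no environment around it.
def format_report(sales):
    total = sum(sales)
    return f"{len(sales)} sales, {total} total"

# Imperative shell: the thin layer that actually does things,
# such as printing to the screen.
def main():
    sales = [120, 45, 80]
    print(format_report(sales))

main()  # prints "3 sales, 245 total"
```

All the logic worth testing lives in `format_report`; the impure surface area is reduced to a single `print`.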

Referential Transparency

Both alwaysFive 2 and 5 are expressions. Expressions are distinct from function definitions. Expressions make up the body of a function definition. Values like 5 and 2 are expressions. More complicated expressions can be created by applying functions to values, like in alwaysFive 2. Expressions can be passed to functions as arguments. The process of evaluation is determining the value of an expression. In this case, the two expressions have the same value, 5.

We can, in fact, substitute the value 5 wherever we see the expression alwaysFive 2, without changing the meaning of the program. This property is called referential transparency.
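The substitution can be demonstrated directly. In this Python sketch (reusing the translated alwaysFive from earlier, redefined here so the example stands alone), replacing each call with its value leaves the result unchanged.

```python
def always_five(y):
    return 5

# Referential transparency: an expression may be replaced by its value
# anywhere it appears, without changing the program's meaning.
a = always_five(2) + always_five(2)   # the original expression
b = 5 + 5                             # the substituted version

print(a == b)  # -> True; both are 10
```

This only works because `always_five` is pure; an impure function could return something different on its second call, and the substitution would change the program's behaviour.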

As an aside, referential transparency lets us memoize a very time-consuming function, so that subsequent evaluations aren't wasted recomputing the result. This is easy with pure functions precisely because of this property: every call with the same arguments can be substituted for the resulting value.
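Python's standard library offers memoization off the shelf via `functools.lru_cache`; the `slow_square` function below is a stand-in for any expensive pure computation.

```python
from functools import lru_cache

# Because a pure function's result depends only on its arguments,
# results can be cached and reused: memoization.
@lru_cache(maxsize=None)
def slow_square(n):
    # imagine a very expensive computation here
    return n * n

slow_square(1000)   # computed once...
slow_square(1000)   # ...then fetched straight from the cache
print(slow_square.cache_info().hits)  # -> 1
```

Memoizing an impure function this way would be a bug, not an optimisation: the cache would happily return a stale result that the outside world had since invalidated.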

Conclusion

That’s the end of our primer on the history and theory of functional programming contrasted to Object-Oriented Programming.

In Chapter 2 we’ll focus on how functional programming handles repetition and data versus how they are typically approached in OOP.

Be sure to follow me to be notified of the next chapter.
