A Category Primer

Category theory formalizes mathematical structure. The term ‘theory’ is a misnomer, as there isn’t any hypothesis that’s subject to disconfirmation. Rather, you could think of category theory as a formalism or ‘language’ for expressing the abstract qualities of structures and relationships. It introduces a number of terms and definitions, which I like to call the ‘category zoo’ (which I will explore in a future post).

Wikipedia says, “Category theory has practical applications in programming language theory, for example the usage of monads in functional programming.” I find that statement to be a stretch: while one might find it convenient to express ideas in terms of concepts introduced in category theory, I believe these ideas were perfectly well-articulated even without it. My goal, then, isn’t to evangelize any practical benefit of understanding category theory, but simply to explain what it is.

The Basics

A category is a collection of objects and arrows (aka morphisms) that obey certain rules. Multiple collections may include the same objects and arrows; as long as these rules are obeyed within a collection, the collection is said to form a category.

  1. Every collection has zero or more objects and arrows.
  2. Every arrow has a source object and a target object.
  3. Every collection is closed under composition.
  4. Composition is associative.
  5. Every object has a unique identity arrow.

An object is a simple beast, serving only as the start or end of arrows. It has no further structure and cannot be ‘opened up’. Instead, an object can be defined only in terms of its relationships with other objects, represented by arrows. Multiple arrows may exist between any two objects (or even from an object to itself). Each of these arrows is distinct, and arrows are given different names to distinguish them from each other.

A category is closed under composition, which means that if you have an arrow f from A to B, and an arrow g from B to C, there must exist, within the collection, another arrow g \circ{} f (pronounced g after f) from A to C which is the composition of the two. Note that there could be many arrows from A to C, but only one of them is g \circ{} f.

Composition is associative, which means that if you have an arrow f from A to B, an arrow g from B to C, and an arrow h from C to D, then we have:

h \circ{} (g \circ{} f) = (h \circ{} g) \circ{} f

In the diagram below, when you take a path from A to D, it doesn’t matter if you take the blue path or the red one, as they’re both equivalent.

An identity arrow of an object X is a unique arrow id_X whose source and target are the object itself, with the additional property that for any arrow f from A to B, we have:

id_B \circ{} f = f = f \circ{} id_A

Notice that these conditions must hold true for any arrow f, not just a particular one. In the diagram below, the blue and red paths are equivalent for any arrow f.
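These laws can be checked concretely in the category whose objects are Haskell types and whose arrows are functions between them (often informally called Hask). A minimal sketch, with arbitrarily chosen arrows f, g and h:

```haskell
-- Arrows in the category of Haskell types and functions:
-- (.) is composition and id is the identity arrow.
f :: Int -> Int
f = (+ 1)          -- an arrow f from A to B

g :: Int -> Int
g = (* 2)          -- an arrow g from B to C

h :: Int -> Int
h = subtract 3     -- an arrow h from C to D

main :: IO ()
main = do
    -- Associativity: h . (g . f) agrees with (h . g) . f.
    print ((h . (g . f)) 10 == ((h . g) . f) 10)  -- prints True
    -- Identity laws: id . f and f . id both agree with f.
    print ((id . f) 10 == f 10 && (f . id) 10 == f 10)  -- prints True
```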

An isomorphism is an invertible morphism (arrow). Given arrows f and f' from A to B and B to A respectively, they form an isomorphism if the following statements hold true.

\begin{aligned}
f' \circ f &= id_A \\
f \circ f' &= id_B \\
\end{aligned}

f and f' are inverses of each other. The inverse of an arrow, if it exists, is unique. Two objects A and B are isomorphic if there exists an isomorphism between them, noting in passing that there may be more than one such isomorphism.
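For a concrete example in the category of Haskell types and functions, the arrows (+ 1) and subtract 1 are inverses of each other, so each is an isomorphism; the sketch below checks both equations pointwise on a small sample:

```haskell
-- f and f' are inverse arrows between Int and Int, forming an
-- isomorphism: f' . f = id and f . f' = id.
f, f' :: Int -> Int
f  = (+ 1)
f' = subtract 1

main :: IO ()
main = print (all (\x -> (f' . f) x == id x && (f . f') x == id x)
                  [-5 .. 5])  -- prints True
```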

An initial object has exactly one arrow to every object in the category (including itself), whereas a terminal object has exactly one arrow from every object. If the source and target are the object itself, then the arrow must be the object’s identity arrow. A category may have any number of initial or terminal objects, or it may have none at all. Initial and terminal objects are duals of each other, identical concepts but with the direction of arrows reversed. In the diagram below, A and F are initial objects, whereas C is a terminal object.

The initial and terminal properties are called universal properties, and an object that satisfies either of these properties is called a universal object. If there are multiple initial objects in a category such as A and F above, then these objects must be isomorphic. Similarly, if there are multiple terminal objects in a category, then they must be isomorphic as well. We say that initial and terminal objects are ‘unique up to isomorphism’ to express this idea that while the objects may not be identical, there is an isomorphism between them.

Interlude: Category Set

The category of sets, denoted by Set, is one whose objects are sets, and morphisms are total functions from source to target. In this category, each set is nothing more than an object serving as the source or target of one or more morphisms; a categorical description abstracts over the elements contained within any set, leaving behind only how sets relate to each other.

In the category of sets, the empty set is an initial object, as there is exactly one total function \mathcal{F_{\varnothing \rightarrow S}} from the empty set to every set \mathcal{S} (including \varnothing itself). The intuition for this is as follows: a total function must specify an output for every element of its input set, but the empty set \varnothing has no elements, so there is nothing to specify; the ‘empty function’ satisfies the requirement vacuously, and it is the only function that does. Notice the parallels with propositional logic, which states that the following is true for any B.

\lnot A \rightarrow (A \rightarrow B)

Further, there is exactly one total function \mathcal{G_{S \rightarrow U}} from every set \mathcal{S} to any singleton set \mathcal{U} (a set with exactly one element), as there is exactly one way to construct such a function: every input must map to the single available element. Every singleton set is therefore a terminal object. A function from any set to a given singleton set is what is conventionally called a constant function — no matter what input you offer, you get the same output, as there is no other alternative.

In the category of sets, as is true for any category, both initial and terminal objects are unique up to isomorphism. Notice that there is exactly one empty set (initial object) and infinitely many singleton sets (terminal objects). All singleton sets are isomorphic, but the empty set is special because it is also unique (and therefore trivially isomorphic to itself through id_\varnothing).

In the diagram below showing a part of the category of sets, \varnothing is the empty set, whereas U_1, U_2 and U_3 are singletons.
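Haskell’s type system mirrors this picture: the uninhabited type Void plays the role of the empty set, and the unit type () that of a singleton. A small sketch (the names fromEmpty and toUnit are illustrative):

```haskell
import Data.Void (Void, absurd)

-- absurd :: Void -> a is the unique arrow out of the initial object;
-- it can never actually be called, since Void has no values.
fromEmpty :: Void -> Int
fromEmpty = absurd

-- There is exactly one total function into the terminal object ():
-- it must return () for every input, i.e. it is a constant function.
toUnit :: Int -> ()
toUnit = const ()

main :: IO ()
main = print (map toUnit [1, 2, 3])  -- prints [(),(),()]
```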

Universal Construction

Universal construction is a method of leveraging universal properties to construct new objects. We can observe this method in action in the context of defining products and coproducts.

Product

In the context of sets, a product (or more formally, the Cartesian product) of two sets A and B is A \times B, where:

A \times B = \{(a, b) \mid a \in A, b \in B\}

In category theory, we broaden this definition to apply to any category, and define the product in terms of universal properties. We start by observing that the essential property of a product \mathcal{P} is that it comes equipped with two projection arrows, one to a left projection \mathcal{L} and one to a right projection \mathcal{R}. With this, we define a candidate \mathcal{P'} for the product as any object that can likewise yield \mathcal{L} and \mathcal{R}.

But there may be many such candidates — which one do we pick? For instance, the tuple (X, Y) can be mapped to X and Y, but so can (X, Y, Z) — simply by ignoring Z. We can thus think of each of the remaining candidates as containing some degree of additional noise that should be ignored. In other words, (X, Y, Z) can be uniquely mapped to (X, Y). This is true of any such candidate, and we can represent this idea in the following diagram.

Here, l and r denote the candidate \mathcal{P'}’s projections, l' and r' the product \mathcal{P}’s, and m the unique arrow from \mathcal{P'} to \mathcal{P}.

l = l' \circ{} m\\
r = r' \circ{} m

There is a catch, however. You could still have many candidates that look like \mathcal{P}. For instance, both (X, Y) and (Y, X) are equivalent in terms of their information content, and there is no reason to consider one to be a better candidate than the other for \mathcal{P} — they are isomorphic. Or expressed more formally, \mathcal{P} is not unique, but “unique up to isomorphism”.
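In the category of Haskell types and functions, the pair type is a product: fst and snd are the projections, and any candidate (here a triple carrying extra ‘noise’) factors through the pair via a mediating arrow. A sketch, with illustrative names f, g and m:

```haskell
-- The pair (Int, String) is the product of Int and String:
-- fst and snd are the two projection arrows.

-- A candidate object: a triple that also carries a Bool of "noise".
f :: (Int, String, Bool) -> Int       -- candidate's left projection
f (x, _, _) = x

g :: (Int, String, Bool) -> String    -- candidate's right projection
g (_, y, _) = y

-- The unique mediating arrow from the candidate to the product.
m :: (Int, String, Bool) -> (Int, String)
m t = (f t, g t)

main :: IO ()
main = do
    let t = (1, "one", True)
    -- The candidate's projections factor through the product.
    print (f t == fst (m t) && g t == snd (m t))  -- prints True
```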

Coproduct

A coproduct is the dual of a product, which you can construct by reversing all the arrows from the previous example.

In the context of sets, the coproduct \mathcal{C} represents a disjoint union (also known as tagged union) of \mathcal{L} and \mathcal{R}.

Here, p and q denote a candidate’s injections, p' and q' the coproduct \mathcal{C}’s, and m the unique arrow from \mathcal{C} to the candidate.

p = m \circ{} p' \\
q = m \circ{} q'
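Dually, in the category of Haskell types, Either plays the role of the coproduct: Left and Right are the injections, and the standard either function builds the unique arrow out of the coproduct from a pair of arrows. A sketch with an illustrative describe function:

```haskell
-- Either Int String is the coproduct (disjoint union) of Int and
-- String; Left and Right are the two injection arrows.
inL :: Int -> Either Int String
inL = Left

inR :: String -> Either Int String
inR = Right

-- Any pair of arrows out of Int and String factors uniquely through
-- the coproduct, via the `either` combinator.
describe :: Either Int String -> String
describe = either (\n -> "number " ++ show n) (\s -> "string " ++ s)

main :: IO ()
main = do
    putStrLn (describe (inL 7))     -- prints "number 7"
    putStrLn (describe (inR "hi"))  -- prints "string hi"
```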

More on the “category zoo” — next time. That’s all for now, folks! 🖖

The Command Pattern

Modularity is a desirable property of any reasonably complex software. Modularity allows the programmer to examine, understand and change parts — modules — of the software while temporarily ignoring the rest of it. When the software becomes too large for a single programmer to work on it sequentially, modularity allows a team of individuals to work on it in parallel. Ideally, modularity is recursive — modules should themselves consist of modules, and so on, until each module is small enough for an individual to grasp quickly, even with an arbitrarily large team.

From one perspective, modularity is less about breaking down software into smaller modules, and more about creating small modules that can be easily combined with other modules to create large systems. Combining modules is called ‘composition’, and composability is the holy grail of software design.

Functions are the ultimate tool in the toolbox of composition. In mathematics, a function has inputs and outputs, and its definition represents the mapping of inputs to outputs. In the computational world, these mappings may be viewed as transformations of inputs into outputs. Functions are inherently composable, as under the right conditions, the outputs of one function may be connected to the input of another function (even itself). Unfortunately, functions in mainstream programming languages are impure in the sense that they may do other things, such as write bytes to disk or send data over a network. These so-called ‘effects’ hinder our ability to compose functions using their mathematical representations, unless the effects themselves are modeled as first-class inputs or outputs.

Only a few languages, like Haskell, guarantee pure functions — functions that are free of effects. In Haskell, effects are possible only if they are modeled as first-class inputs or outputs. Effects are encoded into runtime instances of a special type called IO, and it is the responsibility of the language runtime to execute these effects on behalf of the programmer. For example, in the program below, the main function returns an IO instance, and the language runtime executes the effects encapsulated by the instance. In fact, without resorting to backdoor (aka unsafe) techniques, it is not possible to perform effects within a function that doesn’t return an IO instance. With the IO type, the types of a Haskell program declare which functions are effectful, so these functions can be distinguished from ones that are not.

import System.IO (BufferMode (NoBuffering), hSetBuffering, stdout)

-- Main program.
-- This function returns an IO instance.
main :: IO ()
main = do
    hSetBuffering stdout NoBuffering          -- :: IO ()
    putStr $ "Enter a number x: "             -- :: IO ()
    x <- getLine                              -- :: IO String
    putStr $ "Enter a number y: "             -- :: IO ()
    y <- getLine                              -- :: IO String
    putStrLn $ show $ mult (read x) (read y)  -- :: IO ()

-- Multiply two numbers.
-- This function cannot write to disk or send data over the network.
mult :: Int -> Int -> Int
mult x y = x * y

Effects represented by IO instances can themselves be combined, but only sequentially. In the example above, each line within the main function returns some kind of IO instance. These IO instances are strung together to create a single combined expression…which is itself an IO instance. Furthermore, the computational results of an IO instance can never be extracted into a pure function: once you enter the real world, you can never come back.
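For the curious, this stringing-together is ordinary monadic sequencing: do-notation is syntactic sugar for the (>>=) and (>>) operators. The two definitions below describe the same composite IO instance:

```haskell
-- do-notation is sugar for sequencing IO instances with (>>=)/(>>).
withDo :: IO ()
withDo = do
    putStrLn "first"
    putStrLn "second"

-- The same composite effect, written with (>>) directly.
desugared :: IO ()
desugared = putStrLn "first" >> putStrLn "second"

main :: IO ()
main = withDo >> desugared
```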

Object Oriented Languages

In object-oriented languages like Java, composability remains the holy grail of software development, and lessons from the functional world still apply. The essence of the ideas described above can be boiled down into a simple rule of thumb —

When you need to perform an action that deals with the external world, like writing to disk or sending data over the network, encapsulate the action within a ‘command’, and separate the decision of performing the action from actually performing the action.

This separation allows you as the programmer to inspect, re-arrange and re-compose your effectful code easily. Given adequately precise shapes for commands, the compiler will even aid you in making these changes safe. The command interface is analogous to the IO type in Haskell. And just like IO, you can string together commands to construct more sophisticated composite ones.

Once you are speaking the language of commands, you can perform computations on the commands themselves. For instance, suppose that instead of printing a message to the screen, you create a ‘log statement’ object that is capable of printing a message to the screen, and then invoke it. You can now enrich all log statements with timestamps, apply filtering based on various criteria and perform other actions before or after you print messages. As another example, suppose that instead of calling a remote web service, you create a ‘service invoker’ object that is given all of the information it needs to call the remote web service, and then invoke it. You can now apply throttling and caching mechanisms that control the flow of how these effects are performed.
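Although the discussion above is framed in terms of object-oriented languages, the same shape can be sketched in Haskell by modeling commands as plain data and interpreting them at the edge. All names here (Command, withPrefix, keepInfo, execute) are illustrative, not from any particular library:

```haskell
-- Commands as first-class data: we can inspect, transform and filter
-- them as ordinary values before anything touches the real world.
data Level = Debug | Info deriving (Eq, Show)

data Command
    = Log Level String   -- print a message to the screen
    | Invoke String      -- call a (hypothetical) remote service by URL
    deriving (Eq, Show)

-- Enrich every log statement with a prefix (e.g. a timestamp).
withPrefix :: String -> Command -> Command
withPrefix p (Log lvl msg) = Log lvl (p ++ msg)
withPrefix _ cmd           = cmd

-- Filter out noisy log statements before execution.
keepInfo :: [Command] -> [Command]
keepInfo = filter notDebug
  where
    notDebug (Log Debug _) = False
    notDebug _             = True

-- Only here do commands become effects.
execute :: Command -> IO ()
execute (Log _ msg)  = putStrLn msg
execute (Invoke url) = putStrLn ("calling " ++ url)  -- stub for a real call

main :: IO ()
main = mapM_ (execute . withPrefix "[12:00] ")
             (keepInfo [Log Debug "noise", Log Info "ready", Invoke "svc"])
```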

The real world is messy, uncertain and error-prone. If we can limit the interactions of our systems with the external world to a few points at the edges, our software is that much more robust, and easier to develop, operate and maintain.

That’s all for today, folks! 🖖

The Untyped λ-Calculus

The untyped λ-calculus is a formal system in mathematical logic that expresses computations. Each computation is presented as a term, and terms may themselves be composed of additional terms. Every term is either a variable, an abstraction or an application. Evaluating a program is equivalent to starting with a term and performing a series of substitutions, based on a set of formal rules.

Grammar

The ABNF-style grammar for the untyped λ-calculus is shown below. For simplicity, we assume that whitespace is not significant except as a delimiter between terms (where needed), that a var is any single-letter symbol, and that parentheses may be used to group terms.

term := var | abs | app
abs  := "λ" var "." term
app  := term term

Example

Given below is an example of a well-formed term, where x, y and z are variables, \lambda x.x and \lambda xy.xyz are abstractions, and xyz and the term as a whole are applications. Here, \lambda xy is conventional shorthand for \lambda x.\lambda y (an abstraction over two variables), and xyz in the body denotes the application of x to y, applied in turn to z.

(\lambda x.x)(\lambda xy.xyz)

Note that there are actually four variables in this expression – the x variables in the two parenthesized terms are completely unrelated as they are bound to different abstractions. When a variable is not bound to any abstraction, it is said to be free.

\left(\overbrace{\lambda x.\underbrace{x}_{\text{\footnotesize Bound Variable}}}^{\text{\footnotesize Abstraction}}\right)\left(\overbrace{\lambda xy.xy\underbrace{z}_{\text{\footnotesize Free Variable}}}^{\text{\footnotesize Abstraction}}\right)

Substitution Rules

α-conversion

Within the scope of an abstraction, a bound variable may be freely substituted with any symbol that isn’t already in use (which means it is neither free nor bound within the current context). This is known as an α-conversion. For instance, the example above is equivalent to the following expressions.

\left(\lambda x.x\right)\left(\lambda wk.wkz\right)
\left(\lambda y.y\right)\left(\lambda xy.xyz\right)

However, it is not equivalent to the following expression, as it inadvertently converts the free z into a bound one.

\left(\lambda x.x\right)\left(\lambda zy.zyz\right)

β-reduction

When an abstraction is applied to a term, the former can be reduced by substituting every occurrence of the first bound variable of the abstraction with the term it is applied to. This is known as a β-reduction. In traditional programming languages, this corresponds to function application, converting a complex expression into something simpler. Our original example above reduces to the following.

\left(\lambda x.x\right)\left(\lambda wk.wkz\right)\xrightarrow{β}\lambda wk.wkz

The first term, (\lambda x.x), is equivalent to a function that returns its argument as-is – the identity function. Applying the identity function to the second term simply returns the second term.

Note that β-reduction doesn’t always simplify the term. In some cases, further reduction yields the same term ad infinitum, in which case the term is said to be in a ‘minimal form’. In other cases, the term may actually grow with each reduction, in which case it is said to diverge. In all other cases, when no further β-reduction is possible, the term is said to be in β-normal form.

\left(\lambda x.x x\right)\left(\lambda x.x x\right) \tag{\text{Minimal}}
\left(\lambda x.xxy\right)\left(\lambda x.xxy\right) \tag{\text{Divergent}}

Writing an interpreter for the untyped λ-calculus is relatively straightforward in Haskell.

Preparation

  • Make sure Stack is installed on your system.
  • Clone the package from GitHub to run the code.
$ git clone https://github.com/rri/untype.git
$ cd untype
$ stack build
$ stack test
$ stack exec -- untype

A Quick Walkthrough


The core data structure used to represent terms mimics the ABNF-style grammar described earlier. This recursive type declaration is easy to write in Haskell, whose non-strict evaluation strategy means values are handled by reference, so no explicit indirection is needed.

data Term
    = Var Sym       -- ^ Variable
    | Abs Sym Term  -- ^ Abstraction
    | App Term Term -- ^ Application
    deriving (Eq)

For contrast, a similar data structure in Rust would look like this (notice the Box type that adds the necessary level of indirection).

pub enum Term {
    Var(Sym),
    Abs(Sym, Box<Term>),
    App(Box<Term>, Box<Term>),
}

The general strategy here is to accept an expression as newline-terminated text, apply a parser to the input to derive an abstract syntax tree, apply α-conversion and β-reduction strategies on the term until it converges, and then finally print it to the screen.
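To make the middle steps concrete, here is a deliberately naive sketch: the running example encoded as a Term (assuming Sym is a plain String, which may differ from the actual package), together with a single β-step that ignores variable capture (the real interpreter must α-convert first, as discussed below):

```haskell
type Sym = String

data Term
    = Var Sym       -- ^ Variable
    | Abs Sym Term  -- ^ Abstraction
    | App Term Term -- ^ Application
    deriving (Eq, Show)

-- (λx.x)(λxy.xyz), with the shorthand λxy written out in full.
example :: Term
example = App (Abs "x" (Var "x"))
              (Abs "x" (Abs "y" (App (App (Var "x") (Var "y")) (Var "z"))))

-- Substitute term t for symbol s (naively: no capture avoidance).
subst :: Sym -> Term -> Term -> Term
subst s t (Var v)   = if v == s then t else Var v
subst s t (Abs v b) = if v == s then Abs v b else Abs v (subst s t b)
subst s t (App l r) = App (subst s t l) (subst s t r)

-- One β-step: apply an abstraction by substituting its bound variable.
betaStep :: Term -> Term
betaStep (App (Abs v b) t) = subst v t b
betaStep t                 = t

main :: IO ()
main = print (betaStep example)
```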

We use attoparsec, a nifty parser-combinator library, to write simple parsers and combine them into larger ones. As we apply a few basic parsers recursively, we need to look out for a few gotchas, called out in the code, that might cause the parser to loop indefinitely or give us incorrect results.

Finally, determining when to generate fresh variables and how to name them is surprisingly challenging. Here is an example where variable names must be substituted prior to reduction.

\overbrace{\left(\lambda x y.x y y\right)}^{\text{\footnotesize Term 1}}\overbrace{\left(\lambda u.u y x\right)}^{\text{\footnotesize Term 2}}

Here, Term 2 needs to replace the x within Term 1 as part of the β-reduction step. However, Term 2 already has a y, conflicting with the y – a distinct bound variable – in Term 1. We therefore need to replace the y in Term 1 with a fresh variable, and only then proceed with the substitution.

We leverage a simple strategy for generating fresh variables. First, we collect free variables across the whole term. Then, as we traverse the term, we keep track of all bound variables in the current context. Whenever we need a fresh variable, we take the original and append an apostrophe (') at the end. We then check this new variable against the list of free variables as well as the list of bound variables in the current context. If there are no collisions, we’re done; if not, we repeat the process.
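The strategy can be sketched as follows (the function name fresh and its argument order are illustrative, not necessarily those used in the package):

```haskell
-- Generate a fresh variable name: append apostrophes to the original
-- until the candidate collides with neither the free variables of the
-- whole term nor the bound variables in the current context.
fresh :: [String] -> [String] -> String -> String
fresh freeVars boundVars original = go (original ++ "'")
  where
    taken = freeVars ++ boundVars
    go candidate
        | candidate `elem` taken = go (candidate ++ "'")
        | otherwise              = candidate

main :: IO ()
main = putStrLn (fresh ["y'"] ["y"] "y")  -- prints y''
```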

That’s all for today, folks! 🖖