Lately I’ve been auditing a course in program analysis and synthesis and the material presented in it keeps reminding me of category theory. This is my attempt to describe a categorical intuition for static analysis (or a static analysis co-intuition of category theory).

Disclaimer: Since my goal is developing intuition, I won’t attempt any rigorous definitions or proofs; nor will I dwell too much on expository material.

**Abstract Interpretation**

Static analysis tries to answer questions about a program’s behavior without actually executing it, by reasoning about program semantics. Since computing the exact concrete semantics of a program is undecidable, *abstract interpretation* uses *sound* over-approximations of semantics.
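To make "sound over-approximation" concrete, here's a minimal sketch using a hypothetical sign domain (the names `alpha` and `abs_mul` are mine, not standard API):

```python
# A toy sign abstraction: integers are abstracted to their sign.
# "TOP" means "could be any sign" -- the over-approximating element.

def alpha(n):
    """Abstract a single integer to its sign."""
    if n < 0:
        return "NEG"
    if n == 0:
        return "ZERO"
    return "POS"

def abs_mul(a, b):
    """Abstract multiplication on signs, sound w.r.t. concrete *."""
    if a == "ZERO" or b == "ZERO":
        return "ZERO"
    if a == "TOP" or b == "TOP":
        return "TOP"
    return "POS" if a == b else "NEG"

# Soundness: for all x, y we have alpha(x * y) == abs_mul(alpha(x), alpha(y)).
# The abstraction happens to be exact for *; for + it would genuinely
# over-approximate (NEG + POS must answer TOP).
```

Soundness says the abstract answer always covers the concrete one; precision is what we trade away for decidability.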

Two important aspects of the abstract interpretation scheme are how to abstract program properties and transformers (selecting abstract domains); and how to abstract program semantics (moving between abstract domains). We can think of those as *what* and *how* abstractions, respectively.

In the diagram below, semantic abstractions are the vertical arrows between abstract domains (rectangles), and the abstract transformers are the horizontal arrows.

**Categorical Interpretation**

Abstract domains are modeled as complete (semi-)lattices, and the elementary program transformers over them as order-preserving functions. Semantic abstractions between domains are themselves monotone functions.

We can interpret (ha!) the lattice of abstract domains as a category, where semantic states, or configurations, are objects and an arrow exists between two configurations if there’s an abstract transformer from one to the other. In this category there’s a correspondence between lattice properties and some categorical limits:

- Top/Bottom are Terminal/Initial objects, respectively.
- Meets/joins are products/coproducts, respectively.
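The correspondence above can be sketched as code. This is a poset-as-category view of a toy sign lattice (encoding and names are my own, chosen for illustration):

```python
BOT, NEG, ZERO, POS, TOP = "BOT", "NEG", "ZERO", "POS", "TOP"

# Upward closure: ORDER[a] is the set of elements >= a in the lattice.
ORDER = {
    BOT:  {BOT, NEG, ZERO, POS, TOP},
    NEG:  {NEG, TOP},
    ZERO: {ZERO, TOP},
    POS:  {POS, TOP},
    TOP:  {TOP},
}

def leq(a, b):
    """An arrow a -> b exists in the poset category iff a <= b."""
    return b in ORDER[a]

def join(a, b):
    """Least upper bound of a and b: their coproduct in the poset category."""
    uppers = ORDER[a] & ORDER[b]
    # The least of the common upper bounds is the one with the largest
    # upward closure (the "lowest" element of `uppers`).
    return max(uppers, key=lambda u: len(ORDER[u]))

# TOP is terminal (every object has exactly one arrow into it),
# BOT is initial, and meet is dual to join.
```

In a poset category there is at most one arrow between any two objects, which is why the universal properties of products and coproducts collapse to the familiar greatest-lower-bound and least-upper-bound conditions.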

Now we notice that the monotone functions between semantic representations are functors between abstract domain categories. Thus, *abstractions are functors*. Since we don’t want to abstract indefinitely – eventually we would like to come back to the “real world” – we have *concretizations*, or functors going in the opposite direction. And this is where we get to the crux of the matter: *Abstraction and concretization are adjoint functors.*
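The adjunction is easiest to see as a Galois connection between sets of integers (ordered by inclusion) and the sign lattice. A sketch, with hypothetical names (`gamma_contains` stands in for membership in `gamma(a)`, since concretizations are infinite sets):

```python
def alpha(s):
    """Abstraction: the best (most precise) sign describing a set of ints."""
    signs = {"NEG" if n < 0 else "ZERO" if n == 0 else "POS" for n in s}
    if not signs:
        return "BOT"
    return signs.pop() if len(signs) == 1 else "TOP"

def gamma_contains(a, n):
    """Is n a member of gamma(a)? (gamma(a) itself may be infinite.)"""
    return (a == "TOP"
            or (a == "NEG" and n < 0)
            or (a == "ZERO" and n == 0)
            or (a == "POS" and n > 0))

def leq(a, b):
    """Order on the flat sign lattice."""
    return a == b or a == "BOT" or b == "TOP"

def adjoint_holds(s, a):
    """The Galois-connection law: alpha(S) <= a  iff  S is a subset of gamma(a)."""
    return leq(alpha(s), a) == all(gamma_contains(a, n) for n in s)
```

The two-sided law `alpha(S) <= a iff S ⊆ gamma(a)` is exactly the hom-set bijection of an adjunction, specialized to poset categories where each hom-set has at most one element.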

**Why is this cool?**

For one, it shows in what sense adjunctions “generalize” inverses: abstraction followed by concretization gets us, approximately, back to where we started.

In addition, when the domain selection enables it, we can use Galois connections to compute *best abstract transformers*. Adjunctions generalize Galois connections, so this is an example of how adjunctions capture the idea of “optimal solutions to a problem”.
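The best abstract transformer induced by a Galois connection is `alpha ∘ F ∘ gamma`, where `F` is the concrete transformer. Since `gamma` of a sign is infinite, this sketch cheats with a small finite universe (an assumption made purely for illustration):

```python
# Best abstract transformer for the successor function on a toy sign domain.
# UNIVERSE is a stand-in for the (infinite) set of integers.

UNIVERSE = range(-3, 4)

def sign(n):
    return "NEG" if n < 0 else "ZERO" if n == 0 else "POS"

def alpha(s):
    """Best sign describing a set of ints."""
    signs = {sign(n) for n in s}
    if not signs:
        return "BOT"
    return signs.pop() if len(signs) == 1 else "TOP"

def gamma(a):
    """Concretization, truncated to the finite universe."""
    if a == "BOT":
        return set()
    return {n for n in UNIVERSE if a == "TOP" or sign(n) == a}

def best_succ(a):
    """alpha . (pointwise n+1) . gamma: the most precise sound abstraction
    of the successor function."""
    return alpha({n + 1 for n in gamma(a)})
```

For example, `best_succ("ZERO")` is `"POS"`, while `best_succ("NEG")` must answer `"TOP"` because a negative value incremented may become zero; no sound transformer on this domain can do better, which is what “best” means here.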

Finally, this new setting gives rise to some interesting questions:

- Adjoint functors induce an “abstract interpretation monad”. Can we “lift” programs to this monadic representation to easily utilize abstract interpretation?
- Semantic abstractions themselves form a lattice, so can we generalize further and get a higher-order category?
- The strongest-postcondition and weakest-precondition predicate transformers of axiomatic semantics are also adjoint, so this may hint at a nice correspondence between operational and axiomatic semantics (and likewise for denotational semantics).
- What are other categorical limits/universal properties (pullbacks, equalizers etc.) in those categories?

**Summary**

Trying to understand static analysis in categorical terms actually helped me gain better intuition about category theory, since I’m not versed enough in algebraic geometry to relate to “common” intuitions. I was surprised to find very little research in this direction and I’m curious how these category theory insights can be put to practical use.