Parametric polymorphic functions have very uniform behaviour ("works the same way for all types"), so a few test cases with a few concrete types can tell you a lot about the general case. For example, the following conversation is possible:
A: I have implemented f :: a -> [a].
B: Don't show me the code! But show me what f () happens to be.
A: It's [(), (), ()].
B: Then I also know: f 4 is [4, 4, 4].
This comes from an important theorem that’s pretty daunting to state. Fortunately, it is closely related to a theorem about programming with abstract types. I think that the abstract type story is more relatable, so I will start with it, and then we can transition to the parametric polymorphism story.
But here is how the two are connected. Suppose there is a module ZS that exports these items:
abstract type A
z :: A
s :: A -> A
then this program that uses ZS
import ZS(A, z, s)
duo :: A
duo = s (s z)
is very much like this polymorphic function
polyduo :: a -> (a -> a) -> a
polyduo z s = s (s z)
Hopefully you are also thrilled to see:
If you rewrite duo to polyduo, that’s dependency injection.
If you test either by giving it a mock version of (A, z, s), that’s mock testing.
Emphatically, you use the test result to predict what will happen under a production version of (A, z, s), which brings us back to parametric polymorphism!
Beautifully, we have a nice little piece of 1980s programming language theory that explains and unifies multiple programming practices.
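For instance, here is that workflow as a sketch in code (the mock and production instantiations, and the names mockResult/prodResult, are my own illustration):

polyduo :: a -> (a -> a) -> a
polyduo z s = s (s z)

-- Mock version of (A, z, s): instantiate 'a' to Int.
mockResult :: Int
mockResult = polyduo 0 succ        -- 2

-- Production version of (A, z, s): instantiate 'a' to String.
-- The mock run already predicts the shape: s applied twice to z.
prodResult :: String
prodResult = polyduo "" ('x' :)    -- "xx"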
An abstract type can have many possible implementations. A program that uses the abstract type is given one such underlying implementation, and then later someone may switch to another. So there is a correctness question to ask, and it takes one of these forms:
Do the two implementations “behave the same”, and in what sense? How to make this precise?
Does the program “behave the same as before” after switching? What is a strategy for answering this?
I will walk through the strategy with an example of an abstract type of sets of Ints:
abstract type IntSet
emptyset :: IntSet
isEmpty :: IntSet -> Bool
insert :: Int -> IntSet -> IntSet
lookup :: Int -> IntSet -> Bool
I have a list implementation and a binary search tree implementation. In each one, I change the names to have suffixes (in code) or subscripts (in sentences) to help me refer to them later:
List version pseudocode (suffix "_L" in code, subscript L in sentences, e.g., IntSet_L or IntSetL):
IntSet_L = [Int]
emptyset_L = []
isEmpty_L = \s -> null s
insert_L = \i s -> if lookup_L i s then s else i : s
lookup_L = \i s -> elem i s    -- the usual linear search
-- Note: No element occurs more than once. This becomes a validity condition,
-- aka data invariant.
BST version pseudocode (suffix "_B" in code, subscript B in sentences, e.g., IntSet_B or IntSetB):
data IntBST = Empty | Node Int IntBST IntBST
IntSet_B = IntBST
emptyset_B = Empty
isEmpty_B = \t -> case t of { Empty -> True; _ -> False }
insert_B = \i t -> case t of                  -- the usual BST insert
    Empty -> Node i Empty Empty
    Node j l r | i < j     -> Node j (insert_B i l) r
               | i > j     -> Node j l (insert_B i r)
               | otherwise -> t
lookup_B = \i t -> case t of                  -- the usual BST lookup
    Empty -> False
    Node j l r | i < j     -> lookup_B i l
               | i > j     -> lookup_B i r
               | otherwise -> True
-- Note: Elements are in BST order. This becomes a validity condition,
-- aka data invariant.
Notation: I write a tree inline as (left subtree) root (right subtree), omitting empty subtrees.
Example: 2 at root, 1 on left: ((1)2)
Example: 1 at root, 2 on right: (1(2))
Actually you just need to know those two examples. :)
Making "they behave the same" precise starts from this idea:
We have the insight that, e.g., [1,2] is a valid list representation, (1(2)) is a valid tree representation, and they represent the same set. Perhaps we can design a relation/correspondence, call it “~”, between lists and trees so that “xs ~ t” means that xs and t are respective valid representations of the same set.
So I define: xs ~ t iff xs has no duplicates, t is in BST order, and xs and t contain exactly the same elements.
Examples: [1,2] ~ (1(2)); [2,1] ~ (1(2)); [2,1] ~ ((1)2); the empty list ~ the empty tree.
Negative examples: [1,1] ~ (1) fails (the list is not a valid representation); [1] ~ (2) fails (different sets); [1,2] ~ (2(1)) fails (the tree violates BST order).
Then the following are true, and they are the precise meaning of “the two implementations behave the same”, or sometimes I also say “the two implementations are in correspondence”.
Generally, the correspondence uses "~" between the two representations of IntSet values (lists vs trees), "=" between Int values, "=" again between Bool values, and extends this to functions by "if inputs correspond, then outputs correspond".
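As a sketch, the "~" relation can be spelled out as a Boolean-valued predicate (the name corresponds and the helper names are my own choosing; it assumes the list and tree types above):

import Data.List (nub, sort)

-- xs ~ t, spelled out: both sides are valid representations,
-- and they represent the same set of Ints.
corresponds :: [Int] -> IntBST -> Bool
corresponds xs t =
  validList xs && validTree t && sort xs == sort (toList t)
  where
    validList ys = nub ys == ys          -- no duplicates
    validTree Empty = True               -- BST order
    validTree (Node i l r) =
      all (< i) (toList l) && all (> i) (toList r)
      && validTree l && validTree r
    toList Empty = []
    toList (Node i l r) = toList l ++ [i] ++ toList r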
Now we can talk about programs that use IntSet. Here is an example program:
use :: Int -> Int -> Bool
use i j = let
s0 = emptyset
s1 = insert i s0
s2 = insert j s1
in lookup i s2
Does it give the same answer whether you use IntSetL or IntSetB? Yes. Let me clone the code and add suffixes to help explain:
use_L :: Int -> Int -> Bool
use_L i j = let s0 = emptyset_L
                s1 = insert_L i s0
                s2 = insert_L j s1
            in lookup_L i s2

use_B :: Int -> Int -> Bool
use_B i j = let s0 = emptyset_B
                s1 = insert_B i s0
                s2 = insert_B j s1
            in lookup_B i s2
Step through both versions to check: s0 is [] on the left and the empty tree on the right, and they correspond; next, s1 is [i] on the left and (i) on the right, and [i] ~ (i); next, s2 on both sides represents the set {i, j} (e.g., if i < j, then [j, i] ~ (i(j))), so the two s2's correspond; finally lookup_L i s2 = True = lookup_B i s2, so both versions answer True.
This works for every program that uses IntSet. (And every program that doesn’t, for that matter.)
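To gain extra confidence mechanically (testing, not a proof), one could compare the two clones on random inputs, e.g., with QuickCheck:

import Test.QuickCheck

-- The two clones should agree on every input.
prop_use_agree :: Int -> Int -> Bool
prop_use_agree i j = use_L i j == use_B i j

-- ghci> quickCheck prop_use_agree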
Zooming back out, this is what you do when you have two implementations for an abstract type. First fulfill this prerequisite:
There are two representations; so think up a good relation between the two. (How good is good enough? You just need to make the next point work.)
For each operation, you have two implementations; so verify that the two are “in correspondence”, under the relation you thought up.
Then you can conclude: every program that uses the abstract type gives corresponding answers under the two implementations; in particular, a program of type Int or Bool gives literally equal answers.
(Similarly if you have multiple abstract types.)
The type abstraction theorem is a general and precise way to state the above.
The type abstraction theorem is daunting to state because it takes some setting up to make "in correspondence" precise and cover all cases. The particular challenges: sometimes it means "=", other times it means my (or your) custom "~" relation; and we ambitiously extend it to function types, e.g., IntSet → Bool.
We have two implementations of IntSet; let me call one of them “left side” and “on the left”, the other “right side” and “on the right”. Here is the ultimate purpose of the definitions. For arbitrary type expression T:
“DomL(T)” means what T becomes on the left. Example: DomL(IntSet→Bool) = [Int]→Bool.
“DomL” is short for “domain on the left”.
“DomR(T)” means what T becomes on the right. Example: DomR(IntSet→Bool) = IntBST→Bool.
“DomR” is short for “domain on the right”.
“⟨T⟩” means the appropriate correspondence for T. Examples: ⟨Int⟩ is equality, ⟨IntSet⟩ is my “~”.
I designed the notation to be usable infix, e.g., 4⟨Int⟩4 means 4=4 and is true, 4⟨Int⟩5 means 4=5 and is false.
As for function types, we want “isEmptyL ⟨IntSet→Bool⟩ isEmptyB” to be a thing and be true, for example. This is the hard part, but I think I have primed you for it.
It is true that ⟨T⟩ is a relation between DomL(T) on the left and DomR(T) on the right.
These can be achieved by structural recursion on type expressions. (I think you can already guess how to do DomL and DomR.)
DomL and DomR are defined by:
DomL(Bool) = Bool
DomL(Int) = Int
DomL(IntSet) = [Int]
DomL(T → U) = DomL(T) → DomL(U)
DomR(Bool) = Bool
DomR(Int) = Int
DomR(IntSet) = IntBST
DomR(T → U) = DomR(T) → DomR(U)
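If you like, the structural recursion can itself be rendered as code over a little AST of type expressions (a toy encoding of my own devising, not part of the development):

-- A toy AST of the type expressions in play
data TypeExp = TBool | TInt | TIntSet | TypeExp :-> TypeExp

-- DomL as structural recursion, rendering the left-side type
domL :: TypeExp -> String
domL TBool     = "Bool"
domL TInt      = "Int"
domL TIntSet   = "[Int]"
domL (t :-> u) = "(" ++ domL t ++ " -> " ++ domL u ++ ")"

-- domR is the same except: domR TIntSet = "IntBST"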
⟨T⟩ is defined by:
⟨Bool⟩ is equality: bl ⟨Bool⟩ br iff bl = br
⟨Int⟩ is equality: il ⟨Int⟩ ir iff il = ir
⟨IntSet⟩ is my custom relation: xs ⟨IntSet⟩ t iff xs ~ t
⟨T→U⟩ is defined below, after some motivation.
For ⟨T→U⟩, here is the motivation. Recall that, for example, isEmptyL and isEmptyB are in correspondence in the sense that “if inputs are in correspondence, then outputs are in correspondence”; formally:
if xs ~ t, then isEmptyL xs = isEmptyB t
I would very much like to make that the exact meaning of "isEmptyL ⟨IntSet→Bool⟩ isEmptyB". In longer form, I want:
for all xs :: [Int] and t :: IntBST: if xs ~ t, then isEmptyL xs = isEmptyB t
And observe that I can use DomL, DomR, ⟨IntSet⟩, and ⟨Bool⟩ to make it generalizable:
for all xs :: DomL(IntSet) and t :: DomR(IntSet): if xs ⟨IntSet⟩ t, then isEmptyL xs ⟨Bool⟩ isEmptyB t
The general form goes as:
f ⟨T→U⟩ g iff: for all x :: DomL(T) and y :: DomR(T): if x ⟨T⟩ y, then f x ⟨U⟩ g y
With that, the following are true of the two implementations:
emptysetL ⟨IntSet⟩ emptysetB
isEmptyL ⟨IntSet→Bool⟩ isEmptyB
insertL ⟨Int→IntSet→IntSet⟩ insertB
lookupL ⟨Int→IntSet→Bool⟩ lookupB
That allows us to deduce: useL ⟨Int→Int→Bool⟩ useB. Generally, if program p has type T, then (p under list impl) ⟨T⟩ (p under tree impl).
The same reasoning applies to other abstract types and programs. This framework is summarized as the abstraction theorem (for less clutter, I state it for just one abstract type and one operation):
If:
there is an abstract type A with one operation op :: T (T may mention A),
the left implementation gives a representation (so DomL(A) is known) and an operation opL :: DomL(T), and similarly the right implementation gives DomR(A) and opR :: DomR(T), and
you think up a relation ⟨A⟩ between DomL(A) and DomR(A) such that opL ⟨T⟩ opR,
then:
for every program p :: U that uses the abstract type: (p under left impl) ⟨U⟩ (p under right impl).
(Similarly if you have more operations and multiple abstract types.)
I now transition from programming with abstract types to parametric polymorphic functions. For less clutter, allow me to switch to a shorter toy example:
import ZS(A, z, s)
-- abstract type A
-- z :: A
-- s :: A -> A
duo :: A
duo = s (s z)
The type abstraction theorem applied to this says:
If:
you think up a relation ⟨A⟩ between AL and AR such that zL ⟨A⟩ zR and sL ⟨A→A⟩ sR,
then:
(duo under AL, zL, sL) ⟨A⟩ (duo under AR, zR, sR)
Let me rewrite that to this form:
for all AL, AR, and relations ⟨A⟩ between them:
for all zL, zR, sL, sR: if zL ⟨A⟩ zR and sL ⟨A→A⟩ sR,
then (duo under AL, zL, sL) ⟨A⟩ (duo under AR, zR, sR)
If you apply some kind of dependency injection, my toy program can be converted to this parametric polymorphic function:
polyduo :: ∀a . a -> (a -> a) -> a
polyduo z s = s (s z)
Then “(duo under AL, zL, sL)” is converted to “polyduo zL sL”. Where has AL gone? It is hidden under the act of “instantiate ‘a’ to AL”. Some people explicate it with the subscript notation “polyduoAL”. I will write like that for a while, but eventually omit it. Similarly for “(duo under AR, zR, sR)”.
And the type abstraction theorem can be converted accordingly:
for all AL, AR, and relations ⟨a⟩ between them:
for all zL, zR, sL, sR: if zL ⟨a⟩ zR and sL ⟨a→a⟩ sR,
then polyduoAL zL sL ⟨a⟩ polyduoAR zR sR
Note that the final "if-then" part is "if inputs to polyduo correspond, then outputs of polyduo correspond", i.e., the definition of ⟨a → (a → a) → a⟩. So we can simplify:
for all AL, AR, and relations ⟨a⟩ between them: polyduoAL ⟨a → (a → a) → a⟩ polyduoAR
I want that to be the meaning of "polyduo ⟨∀a . a → (a → a) → a⟩ polyduo", extending the definition of ⟨T⟩ to polymorphic types. This motivates the general form:
f ⟨∀a . T⟩ g iff: for all AL, AR, and relations ⟨a⟩ between them: fAL ⟨T⟩ gAR
With that definition, this is true of polyduo:
polyduo ⟨∀a . a → (a → a) → a⟩ polyduo
The parametricity theorem states this in general:
For every type T in which all type variables are ∀-quantified ("T is a closed type"), and for every term e :: T: e ⟨T⟩ e.
This theorem looks very anticlimactic. But recall that there is a lot of content in how ⟨∀a. T⟩ and ⟨T→U⟩ are defined, and how ⟨a⟩ can harbour an interesting relation for a type variable (or abstract type).
This theorem is behind how we can use a few test cases on a polymorphic function to discover fairly general behaviour.
Suppose e :: ∀a . a → (a → a) → a. The parametricity theorem expands to:
for all AL, AR, and relations ⟨a⟩ between them:
for all zL, zR, sL, sR: if zL ⟨a⟩ zR and sL ⟨a→a⟩ sR, then e zL sL ⟨a⟩ e zR sR
I will test “e 0 succ”, where succ is defined by: succ n = n+1. This amounts to specializing the left side to: AL=Integer, zL=0, sL=succ. I will leave the right side arbitrary, so as to draw general conclusions about “e zR sR”.
The relation I design for ~ or ⟨a⟩ is harder, but here are my design considerations:
I want “0 ⟨a⟩ zR” and “succ ⟨a→a⟩ sR” to be true, so that I can conclude that “e 0 succ ⟨a⟩ e zR sR” is also true.
“succ ⟨a→a⟩ sR” expands to:
for all x :: Integer and y :: AR: if x ~ y, then succ x ~ sR y
That really looks like what you would do in an inductive definition. And “0 ~ zR” really looks like a base case, too.
I want a minimal relation, so that “e 0 succ ⟨a⟩ e zR sR” is helpful.
Here is an exaggerated example of how a non-minimal relation is unhelpful. If I defined x~y to be true for all x::Integer and y::AR, the conclusion "e 0 succ ⟨a⟩ e zR sR" would also be trivially true, without telling me anything.
So I want x~y to be true for enough pairs of x and y, but false for others.
This point and the previous point together really say I want an inductive definition.
So I define ~ inductively (a.k.a. "the smallest relation such that" and "if x~y then it has to come from using these clauses a finite number of times"):
0 ~ zR
if x ~ y, then x+1 ~ sR y
It works as a partial function from the left to the right: it maps 0 to zR, 1 to sR zR, 2 to sR (sR zR), etc.; and it doesn't map negative integers to anything. More importantly, if later we find that 2 ~ foo, for example, then we know foo = sR (sR zR); there is no other choice.
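Written as code, that partial-function reading looks like this (a sketch; zR and sR become parameters since they are arbitrary, and toR is my own name):

-- The relation ~ as a partial function from Integer to a:
-- 0 maps to zR, and n+1 maps to sR of whatever n maps to.
toR :: a -> (a -> a) -> Integer -> Maybe a
toR zR sR n
  | n < 0     = Nothing                 -- negatives relate to nothing
  | n == 0    = Just zR
  | otherwise = sR <$> toR zR sR (n - 1)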
Then we get: e 0 succ ~ e zR sR.
Suppose testing discovers e 0 succ = 2. Then e zR sR = sR (sR zR), and this works for all AR, zR, and sR.
Moreover, for every f :: ∀a . a → (a → a) → a that terminates (in more detail: f z s terminates if z and s terminate): f 0 succ is some non-negative integer n, and then f zR sR = sR applied n times to zR, for all AR, zR, and sR.
So we have basically characterized all functions of type ∀a . a → (a → a) → a under suitable assumptions about termination.
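To make the discovery workflow concrete, here is a sketch with one e of that type (the implementation and the names testE/predictedE are my own illustration):

e :: a -> (a -> a) -> a
e z s = s (s z)             -- pretend we can't see this

-- Mock test at Integer: e 0 succ evaluates to 2 ...
testE :: Integer
testE = e 0 succ            -- 2

-- ... so, by the analysis above, e zR sR = sR (sR zR) at every type:
predictedE :: String
predictedE = e "" ('*' :)   -- "**"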
Suppose trio :: ∀a. a → [a]. Then the parametricity theorem expands to:
for all AL, AR, and relations ⟨a⟩ between them:
for all xL, xR: if xL ⟨a⟩ xR, then trio xL ⟨[a]⟩ trio xR
I haven’t told you what to do with ⟨[T]⟩, but here it is. Define inductively (again "smallest relation" etc.):
[] ⟨[T]⟩ []
if x ⟨T⟩ y and xs ⟨[T]⟩ ys, then (x:xs) ⟨[T]⟩ (y:ys)
You can understand it as: the two lists have the same length, and the respective elements are related by ⟨T⟩. But I show you the inductive definition because it generalizes to all algebraic data types. (But if you generalize “have the same length” to “have the same shape”, that’s the right idea.)
It also turns out that in this and similar examples, you design the element-wise relation ⟨T⟩ to work like a function for simplicity. So suppose you have a function h, and x⟨T⟩y iff h x = y. Then xs⟨[T]⟩ys says: map h xs = ys.
This generalizes: For F an instance of Functor, xs⟨F T⟩ys iff fmap h xs = ys.
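In code, ⟨[T]⟩ is the pointwise lifting of the element relation to lists (a sketch, with liftRel a name of my own):

-- Lift an element-wise relation to lists: same length,
-- respective elements related.
liftRel :: (a -> b -> Bool) -> [a] -> [b] -> Bool
liftRel _ []     []     = True
liftRel r (x:xs) (y:ys) = r x y && liftRel r xs ys
liftRel _ _      _      = False

-- When r x y is "h x == y", liftRel r xs ys is "map h xs == ys".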
Going back to trio :: ∀a. a → [a], and using a function h for ⟨a⟩:
if h xL = xR, then fmap h (trio xL) = trio xR
I use the test case “trio ()”, so I’m specializing to: AL=(), xL=(), h () = xR.
Suppose the test result is trio () = [(), (), ()]. Now I know:
trio xR = fmap h (trio ()) = fmap h [(), (), ()] = [xR, xR, xR]
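As runnable code (the body of trio is my own example, chosen to be consistent with the test result above):

trio :: a -> [a]
trio x = [x, x, x]

-- Test at (): trio () = [(), (), ()]; with h () = xR,
-- fmap h predicts trio xR = [xR, xR, xR] at any type:
demoTrio :: String
demoTrio = trio 'q'         -- "qqq"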
The standard library function fst :: (a,b) -> a extracts the first field of a pair. Conversely, we find it intuitive that any other function of that type has to do the same thing (or else hang). This is justified by the parametricity theorem too.
Suppose f :: ∀a,b. (a,b) → a. You can think of that type as nesting ∀’s: ∀a. ∀b. (a,b) → a. Expanding the parametricity theorem, and using functions for relations (h :: AL → AR for ⟨a⟩, k :: BL → BR for ⟨b⟩):
for all AL, AR, BL, BR, h, k:
for all pL :: (AL,BL) and pR :: (AR,BR): if pL ⟨(a,b)⟩ pR, then h (f pL) = f pR
I haven’t said what to do with ⟨(a,b)⟩, but it’s easily guessable:
(xl,yl) ⟨(T,U)⟩ (xr,yr) iff: xl⟨T⟩xr and yl⟨U⟩yr
Using that, and expanding "for all pL::(AL,BL)" to "for all xl::AL, yl::BL", similarly for pR:
for all xl, yl, xr, yr: if h xl = xr and k yl = yr, then h (f (xl,yl)) = f (xr,yr)
I now specialize: AL=AR, BL=BR, xl=xr, yl=yr, h x = xr (for all x), k y = yr (for all y). Idea: Force h (f (xl,yl)) = xr.
The premises h xl = xr and k yl = yr hold by construction, so the conclusion reads xr = h (f (xl,yl)) = f (xr,yr) = f (xl,yl).
There! Any such f must return the first field of the pair, just like fst.
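If you want to check the free theorem mechanically for a candidate function, here is a sketch (myFst, h, and k are my own stand-ins; any h and k of the right types would do):

-- Free theorem for f :: (a,b) -> a, with functions h, k as the
-- relations: h (f (x, y)) == f (h x, k y)
prop_free :: Int -> Bool -> Bool
prop_free x y = h (myFst (x, y)) == myFst (h x, k y)
  where
    myFst :: (a, b) -> a
    myFst (a, _) = a
    h = show    -- plays ⟨a⟩; any h :: Int -> String works
    k = not     -- plays ⟨b⟩; any k :: Bool -> Bool works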
The two names “parametric polymorphism” and “ad hoc polymorphism” were coined by Strachey for the difference in:
Parametric: A polymorphic program behaves the same way for all types.
Ad hoc: A polymorphic program has unrelated meanings for different types.
The parametricity theorem is the outcome of Reynolds working out a mathematical definition. It was also his idea to start with the abstract type story.
Ad hoc polymorphism allows a program of type ∀a.T to do “if ‘a’ is Int, do something special”. This breaks the parametricity theorem.
Likewise, if a user of IntSet can do “if IntSet is a list, do something special”, this doesn’t treat IntSet as an abstract type, so it breaks the type abstraction theorem.
The kind of polymorphism provided by OO languages leans on the ad hoc side, e.g., two subclasses can implement a method in arbitrarily unrelated ways, and so the knowledge that they belong to the same superclass isn’t very informative.
In a principles of languages course, I’m obliged to fearmonger you with: If a language allows this, then someone will do it, and you have a much harder time reasoning about programs.
Outside the classroom, in fairness, programmers don’t set out to troll each other. You would implement the two subclasses in conceptually related ways, even though the precise relation is difficult to formalize. In this sense, programmers stay close to the spirit of parametric polymorphism, even in a language of ad hoc polymorphism. Still, this opens the door to one more kind of bug, so watch out.