in the past week or so, as mentioned (in brief) previously, i've been learning a lot about how we came to understand the second-order lambda calculus, data abstraction, and eventually modern-day module systems. as a frequent SML user, i think i have (and had before this week) a good intuitive grasp of how the module system works, but only recently have i gained a deeper understanding of what's under the hood, logically speaking, and how it got that way.

i want to try to explain just the surface-scratching essentials of this logical and historical gem, as an expository exercise. i'm primarily interested in telling you about the dialogue between mitchell-plotkin and macqueen in the two papers linked, but i may also involve (and contrast) the modern take in f-ing modules*, which we discussed in RG yesterday. all of these papers are quite good, so read them if you want, but with any luck you won't need to refer to them for my explanation to make sense.

first let's just talk about first-order logic, particularly existential quantifiers. so you have propositions that look like ∃x:τ.P(x) (where τ might be some arbitrary object domain, like the natural numbers), and the rule you might have learned for coming up with such a thing is


t:τ    P(t)
-----------
∃x:τ.P(x)
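
for instance, to prove ∃x:nat. x + x = 4 you'd supply the witness 2 together with a proof that 2 + 2 = 4 (a toy instance of my own, not from either paper):

2:nat    2 + 2 = 4
------------------
 ∃x:nat. x + x = 4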


and the rule for using such a thing is


∃x:τ.P(x)     x:τ, P(x) |- C
-----------------------------
            C


for any proposition C that's closed with respect to x. this is the formulation under which you can think of an existential as an infinitary "or"; specifically, its closed-scope elim rule is similar to disjunction's:


A v B     A |- C     B |- C
---------------------------
            C


but there's another way you might like to view these things: after all, the way you came up with it looks an awful lot like you supplied two things, a term and a proof that the predicate holds of it. but when you *use* it, you don't get to know what the witness is; you just get that it has type τ and that P holds of it, and you work parametrically over that. you don't have direct access to the witness of the existential. what if instead we viewed a proof of this thing as a pair, and gave it elimination rules to project out the components? we can do this with proof terms:


M : ∃x:τ.P(x)
-------------
  π1 M : τ



M : ∃x:τ.P(x)
--------------------
π2 M : [π1 M/x] P(x)


these two different formulations of ∃ are called "weak sums" and "strong sums", respectively. strong sums cough up more from the existential, but they're messier to deal with because now you have to worry about substituting proof terms into types.
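to make the strong-sum reading concrete (again a toy instance of mine): if M : ∃x:nat. x + x = 4, then π1 M : nat is the witness itself, and π2 M : π1 M + π1 M = 4 is the proof about that very witness -- note the proof term π1 M sitting inside a type, which is exactly the messiness in question.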

(tangentially, if you replace τ with a metavariable ranging over propositions, you've got dependent types in the usual sense (props depending on proofs of props), and you could tell a similar story there, but i wanted to start with something you might have seen in a vanilla intro logic course, and afaict that's usually how they do it.)

now fast-forward to the '70s and '80s: people are motivated by programming languages much more than before. this gives rise to Girard's and Reynolds' independent discovery of essentially the same calculus: second-order logic, or the polymorphic lambda calculus. you get this if you take first-order logic and now let the quantifier-bound variables range over types (AKA propositions) instead of terms. with ∀, this results in polymorphism, e.g. "∀a.a → a" is the type of the polymorphic identity function.
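
in SML terms (keeping in mind SML only gives you the prenex fragment of ∀, not full System F), the identity function is the canonical example:

(* SML infers 'a -> 'a, i.e. ∀a. a → a *)
fun id x = x

val n = id 5      (* instantiated at int *)
val s = id "hi"   (* instantiated at string *)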

with ∃, it turns out you can describe data abstraction. for example, "∃a.a×a → bool" might be the type "comparable", inhabited by anything that has a comparison function, or "∃a.(real×real → a) × (a → real) × (a → real)" might be the type describing a geometric point, which has a function for making such a thing out of two real coordinates and ways of getting the xcoord and ycoord back out, but whose implementation is left abstract.
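
in SML these existentials show up as signatures with an abstract type component; here's a sketch of both examples (the signature names are my own):

signature COMPARABLE = sig    (* ~ ∃a. a×a → bool *)
  type a
  val cmp : a * a -> bool
end

signature POINT = sig         (* ~ ∃a. (real×real → a) × (a → real) × (a → real) *)
  type a
  val mk : real * real -> a
  val xcoord : a -> real
  val ycoord : a -> real
end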

the question of import is: are these things strong sums or weak sums?

mitchell and plotkin say they are weak sums. this means the proof terms and their typing rules look like


T type      M : [T/a]B
------------------------
pack_{∃a.B} (T, M) : ∃a.B

M : ∃a.B      a type, x:B |- N : C
------------------------------------
unpack (a, x) as M in N : C


where, again, C is not allowed to mention the existentially bound thing (a).
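
a rough SML rendering, reusing the COMPARABLE signature from above (the encoding is my sketch, not mitchell-plotkin's notation): pack is opaque ascription, and unpack is doing your work inside a functor that's parametric in the abstract type:

structure IntCmp :> COMPARABLE = struct   (* ~ pack (int, <=) *)
  type a = int
  val cmp : a * a -> bool = (op <=)
end

functor Client (C : COMPARABLE) = struct  (* ~ unpack: inside, a and cmp are abstract *)
  fun max (x, y) = if C.cmp (x, y) then y else x
end

structure M = Client (IntCmp)
(* M.max : IntCmp.a * IntCmp.a -> IntCmp.a -- and since IntCmp.a is
   held abstract, you can't even apply it to ints from out here *)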

they explicitly argue against the idea of "projecting" out the first component -- that is, the type component -- quite correctly, because that would defeat the very purpose, i.e. abstraction. as soon as you can project out the type component used to implement the interface expressed by the existential, then you're no longer abstract in it. what's more, the naive change to the system would actually render it inconsistent; you can get type:type with impredicative strong sums.

but if you think about it a little harder, the weak existentials seem awfully restrictive: the type is forgotten, and you can no longer refer to it at all except inside the scope of a completely hypothetical thing. in short, this isn't modular. if you define a module somewhere, shrinkwrap it, and send it to your coworker, then he has to open it at the very top level of his code to get at the existence of its type component. and opening it a second time, e.g.

unpack (a, x) as M in (unpack (b, y) as M in [...])

does not yield that a = b or x = y. moreover, if you ever have *several* modules you want to use, you have to open them *all* at the top level, so now you've got namespace collisions to worry about; you've forgotten that you had a bunch of cleanly separated program components and just thrown all their guts into the same bag o' code.

furthermore, sometimes you *do* want to leak your implementation (i.e. expose the witness type to the client), but you still want to give it an alias to use consistently in your module (like type point = real * real).

so along comes macqueen and says, "we really would like these to look more like strong sums." so he invents this stratified dependent type theory mumbo jumbo that solves the type:type problem and describes a calculus with projections (call them witness and out) to get ahold of the type and term components of the module.
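
you can see the strong-sum flavor in modern SML by simply not sealing anything: an unsealed structure lets you project both components directly (a sketch with my own names, not macqueen's actual calculus):

structure Pt = struct
  type a = real * real
  fun mk (p : real * real) = p
  val xcoord : a -> real = #1
  val ycoord : a -> real = #2
end

type w = Pt.a                   (* the "witness" projection: w = real * real *)
val p : Pt.a = Pt.mk (3.0, 4.0) (* the term projection *)
val leak : real = #1 p          (* nothing is abstract: the implementation shows through *)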

but the original objection still holds: this may achieve modularity, but it sacrifices abstraction. neither side seems to fully come clean with (or maybe even fully appreciate) the fact that there are sacrifices on both sides, and neither offers a way to recover the strengths of the other's work.

it was only much later that we hit upon the notion of sealing. if you essentially treat a module as a strong sum, but seal it at a particular signature (existential type), then you hide the implementation... with the caveat that sealing is a static effect, so you don't get the equality described above (e.g. if you write down a structure and seal it, then do the same identical thing again, the two sealed structures are not equal, nor are their components)... unless you declare it somewhere further out with a name you can refer to. what's more, you can have transparent ascription, which allows you to seal a module but blab the secret identity of some of its type components in the signature you seal it at, thereby making it accessible to the rest of the world. the formalism all this fits in is, uh, beyond the scope of this post? maybe next time. but it's all in the ML module system that we (or i, at least) tend to take completely for granted.
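
here's a sketch of all three phenomena in SML (a toy example of mine):

signature PT = sig  type a  val mk : real * real -> a  end

structure P1 :> PT = struct  type a = real * real  fun mk p = p  end
structure P2 :> PT = struct  type a = real * real  fun mk p = p  end
(* sealing is generative: P1.a and P2.a are incompatible, so this
   would be a type error:  val bad : P1.a = P2.mk (0.0, 0.0) *)

structure P = P1                        (* a name declared further out... *)
val ok : P1.a = P.mk (0.0, 0.0)         (* ...restores the type equality *)

structure Q :> PT where type a = real * real =
  struct  type a = real * real  fun mk p = p  end
val q : real * real = Q.mk (1.0, 2.0)   (* transparent ascription blabs the witness *)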

so, not to get all "respect your elders" about it, but i hope this lends to you, as it did to me, a greater appreciation for the way these simple little logical tools of the past grew up into big powerful module systems -- with equally sexy but much more modern type theory -- via these first glimpses at the tension between modularity and abstraction.

*looks like i didn't get to this. if i do a sequel i will probably talk about it as well as phase splitting. should i do that or was this boring/badly explained? moar y/n?