occasionally you'll hear someone ask, "are higher-order functions 'higher-order' in the same sense as higher-order logic?" here is a possible answer.

from my understanding of current logical diction, something like the following story is true:

once upon a time, humans invented lambda calculus. they said, "a function that takes a functional argument (or returns a functional value) is a higher-order function." they wrote that definition down, and then two different humans read it.

the first said, "ah! a 'higher-order' function is one that operates on a function. so, in general, for something to be 'higher-order' means it operates over functions." (call this generalization 1.)

the second said, "ah! a higher-order function is a function over functions. to generalize, a higher-order X is an X that operates over other Xes." (call this generalization 2.)
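for concreteness, here's the narrow, original sense both readers started from: a function that takes a functional argument. (a python sketch; the names `twice` and `inc` are made up for illustration.)

```python
# a higher-order function in the original lambda-calculus sense:
# `twice` takes another function, f, as an argument.
def twice(f, x):
    return f(f(x))

def inc(n):
    return n + 1

print(twice(inc, 0))  # inc(inc(0)), i.e. 2
```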

error! and from their teachings sprang many logician progeny of both lineages, and the confusion remains.

some examples:
- higher-order abstract syntax. "lam : (tm -> tm) -> tm" is higher-order in the sense of "oh look, there's a left-nested arrow": we're describing syntax in terms of function types (and for that reason the technique may better deserve the name "abstract higher-order syntax").
- higher-order judgment. in twelf lingo we use this to mean, again, a judgment over open terms (like (tm -> tm) -> type), rather than what you might expect following generalization (2): a judgment about judgments (i.e. a metatheorem).
- higher-order logic. if we squint for the sake of uniformity and pretend that by "logic" we mean "predicates", then i think it's accurate to interpret this as an instance of generalization (2): predicates over predicates. it certainly doesn't mean "logic about functional data", which is just propositional logic.
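to make the HOAS example concrete, here's a sketch of "lam : (tm -> tm) -> tm". (this is python with made-up names, standing in for the LF/Twelf original.) the object-language binder is represented by a host-language function, so beta reduction needs no substitution machinery of its own:

```python
from dataclasses import dataclass
from typing import Callable

class Tm:
    """an object-language term"""

@dataclass
class Lam(Tm):
    # the binder is a host-language function Tm -> Tm,
    # mirroring "lam : (tm -> tm) -> tm"
    body: Callable[[Tm], Tm]

@dataclass
class App(Tm):
    fn: Tm
    arg: Tm

def beta(t: Tm) -> Tm:
    # beta reduction: "substitution" is just host-language application
    if isinstance(t, App) and isinstance(t.fn, Lam):
        return t.fn.body(t.arg)
    return t

identity = Lam(lambda x: x)        # object-level identity
const_id = Lam(lambda _: identity) # ignores its argument
```

note the left-nested arrow living in `Lam.body`'s type: that's the "higher-order" of generalization (1).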

despite my NPOV efforts, it might be coming through that i'm a little annoyed with generalization 1. it just seems a bit provincial or narrow-sighted, as though the only thing we could possibly want to get fancy with is the types of the things we describe, rather than the (quite literally) more judgmental, methodology-motivated notions of our system. of course, in practice, i use the jargon that will get me understood, but if i got to prescriptively reinvent it, i'd change the word for generalization 1 to something like "functional" (where, of course, in this perfect universe, "functional programming" would be renamed "value programming").
in the past week or so, as mentioned (in brief) previously, i've been learning a lot about how we came to understand the second-order lambda calculus, data abstraction, and eventually modern-day module systems. as a frequent SML user, i think i have (and had before this week) a good intuitive grasp of how the module system works, but only recently have i come to a deeper understanding of what's under the hood, logically speaking, and how it got that way.

i want to try to explain just the surface-scratching essentials of this logical and historical gem, as an expository exercise. i'm primarily interested in telling you about the dialogue between mitchell-plotkin and macqueen in the two papers linked, but i may also involve (and contrast) the modern take in f-ing modules*, which we discussed in RG yesterday. all of these papers are quite good, so read them if you want, but with any luck you won't need to refer to them for my explanation to make sense.

( mini logic lesson )

*looks like i didn't get to this. if i do a sequel i will probably talk about it as well as phase splitting. should i do that or was this boring/badly explained? moar y/n?
