The second talk I gave at Unilog 2013 last month in Rio was at the Abstract Proof Theory Workshop, on Glue Semantics for Proof Theorists. The slides are here.
I've been meaning to talk about this since 2000, when Sol Feferman sent an email around asking where people thought proof theory would be going in the new millennium. It sounds a bit quaint now, I know...
But back then I wrote a long email that I guess never got to Sol (he must have dozens of others like that) about Proof Theory in Natural Language Semantics. For me, the whole idea of using proofs in Natural Language derivations of sentencehood was a new thing then. Something that I learned about from Mary Dalrymple, John Lamping, Vineet Gupta (we had weekly meetings around that time), Anette Frank, Josef van Genabith and especially the husband, Dick Crouch. I also planned to talk about that in Rome, when Claudia Casadio invited me to a workshop there. But I never got to the workshop: I had small kids and a complicated job then.
Now, 13 years later, things are different, and what used to be a fringe application of proof theory to Natural Language is quite important, perhaps even involving the possibility of millions of dollars... It's certain that Natural Language is nowadays a sexy area of Computer Science, one where there are jobs, grants and projects. Maybe the symbolic ways of doing it are not that fashionable. But with the amount of interest and kudos sloshing around, surely some can be put into the idea (which I find surprisingly cute) that ambiguity in Natural Language can be cast as the problem of finding all the proofs in Natural Deduction of a certain atomic formula from a given set of premisses, where cuts with the identity can be added at judicious points.
OK, I know people need more information than this to find it cute, and I hope to write some more about it soon, but yeah: not only do linguists (or computational semanticists) need to know about proofs, they also need to know how to insert "sensible" cuts with an identity into a proof. Now, for proof theorists, cuts with an identity are totally trivial things: your cut-elimination proof is done when the only cuts left are cuts with identities, and you can add such cuts anywhere you want, since they don't change anything. But when you're doing "parsing as deduction" the paradigm changes completely: where you insert them matters. And since these deductions are our kinds of deductions in Glue Semantics (plain Curry-Howard implicational linear logic trees), our theory ought to be able to keep up with this idea...
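Just to picture what I mean (a rough sketch in my own notation, with t for the proof term built from Gamma, x for the cut formula's variable, and m for a hypothetical modifier): cutting a proof of A from Gamma against the bare identity A proves A gives back the very same proof, whereas cutting it against a modifier of type A ⊸ A wraps the term, so the place where you perform the cut decides which subterm gets wrapped.

    \[
      \frac{\Gamma \vdash t : A \qquad x : A \vdash x : A}{\Gamma \vdash t : A}
      \qquad\text{versus}\qquad
      \frac{\Gamma \vdash t : A \qquad x : A \vdash m(x) : A}{\Gamma \vdash m(t) : A}
    \]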
So, just for my own sake, let me rephrase the stuff above. When we study proofs in Natural Deduction, thinking of the simply typed lambda-calculus as a way of coding (propositional intuitionistic logic) proofs Curry-Howard style, we know that there are two main paradigms to think about. One is the normalization paradigm: given a typed term, corresponding to a proof from zero or some finite number of assumptions, you try to normalize it, and this normalization produces a computation. The second is the proof-search paradigm: given a formula and a fixed collection of premisses, you try to find a proof, that is, a term of the given type. Usually in the normalization paradigm you worry about different derivations, as they correspond to different terms, and you have to work to prove that these different terms all reduce to the same normal form, to some essential version of the proof. By contrast, in the proof-search paradigm, given a collection of assumptions and a formula that follows from them, you are usually happy to find a (or one) proof that these assumptions prove the formula. You know there could be other proofs of the same formula from the same assumptions, but usually you're not worried about finding all the proofs of a given S from a collection of assumptions Gamma.
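Just to make the proof-search half concrete, here is a minimal toy sketch in Haskell, entirely my own code and not the actual Glue Semantics machinery: it enumerates proof terms of a goal formula from a bag of premisses in implicational linear logic, each premiss used exactly once (and, if I have it right, the terms it finds are the beta-normal, eta-long ones). All the names here (Formula, Term, prove, useHead, splits) are made up for this sketch.

    import Data.List (delete)

    -- Formulas: atoms and linear implication.
    data Formula = Atom String | Formula :-> Formula
      deriving (Eq, Show)
    infixr 5 :->

    -- Proof terms, Curry-Howard style.
    data Term = Var String | App Term Term | Lam String Term
      deriving (Eq, Show)

    -- Backward-chaining search: to prove an implication, assume its antecedent
    -- and prove the consequent; to prove an atom, focus on one premiss and
    -- apply it until its head matches the goal, using every premiss exactly once.
    prove :: [(String, Formula)] -> Formula -> [Term]
    prove ctx (a :-> b) =
      [ Lam x body | let x = freshName ctx, body <- prove ((x, a) : ctx) b ]
    prove ctx goal@(Atom _) =
      [ t | (x, f) <- ctx, t <- useHead (delete (x, f) ctx) (Var x) f goal ]

    -- Apply the focused premiss to arguments, splitting the remaining premisses
    -- among those arguments in every possible way (the linear discipline).
    useHead :: [(String, Formula)] -> Term -> Formula -> Formula -> [Term]
    useHead ctx acc f goal
      | f == goal && null ctx = [acc]
      | otherwise = case f of
          a :-> b -> [ t | (ctxA, ctxB) <- splits ctx
                         , arg <- prove ctxA a
                         , t   <- useHead ctxB (App acc arg) b goal ]
          _       -> []

    -- All ways of splitting a list of premisses into two disjoint parts.
    splits :: [a] -> [([a], [a])]
    splits []     = [([], [])]
    splits (p:ps) = [ (p:l, r) | (l, r) <- splits ps ]
                 ++ [ (l, p:r) | (l, r) <- splits ps ]

    -- A bound-variable name not already used in the context.
    freshName :: [(String, Formula)] -> String
    freshName ctx =
      head [ n | i <- [0 ..], let n = "h" ++ show i, n `notElem` map fst ctx ]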
Some philosophers (like Prawitz himself) might straddle the two paradigms and worry about when two proofs should be considered the same (usually when they're beta equivalent and eta related), but for the linguists pursuing the kind of program sketched above this is crucial. They need to know all the proofs of S from Gamma and, moreover, they need to know which ones they should consider "the same". Further, the modifiers (the degenerate trees that correspond to A proves A) can only be inserted in certain bits of the trees corresponding to these proofs. So they need to construct all the proofs of S from the given assumptions, they need to decide when two such proofs should be equal (and hence have the same meaning, so we don't count them twice), they need to know where the modifiers can be inserted in an appropriate way, and they need to do it pretty fast. Why? Because if they want to use these trees as their sentence meanings, and if they have any hope of understanding a bit of text and replying to it while someone else is waiting for their system, then they have to be extremely good proof theorists. The problem of finding all the trees (meanings corresponding to the proofs of a sentence) fast is quite hard, and very much the kind of problem that proof theorists should be able to help with, if they can extend their theory in that direction.
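For instance, running the toy search above on hypothetical premisses for a noun with two modifiers of type n ⊸ n (something like "alleged" and "from London" around "criminal") returns exactly two proof terms, one for each admissible place where a modifier can be wrapped in; deciding whether those two should count as the same meaning is exactly the linguists' problem.

    -- Hypothetical premisses: a noun of type n and two modifiers of type n ⊸ n.
    nounPhrase :: [(String, Formula)]
    nounPhrase =
      [ ("criminal",   Atom "n")
      , ("alleged",    Atom "n" :-> Atom "n")
      , ("fromLondon", Atom "n" :-> Atom "n") ]

    -- prove nounPhrase (Atom "n") gives exactly two proof terms:
    --   App (Var "alleged")    (App (Var "fromLondon") (Var "criminal"))
    --   App (Var "fromLondon") (App (Var "alleged")    (Var "criminal"))
    -- one per order in which the two modifiers are inserted around the noun.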