(This is a continuation of Graph Semantics and Examples, by Dick Crouch)
What is the difference between a graph-based semantic representation and a logical one? At one level, any logical formula can be represented in graphical form, so it may look as though the difference is just one of visualization preferences. But there is perhaps a deeper distinction. One of the startling innovations of Frege's rebooting of logic was to provide a notation where scope could be separated out from argument structure. Prior to this, the representation of "John loves Mary" was obvious:
love(john,mary)
but not so much so for "every man loves a woman":
love(every(man), a(woman))
Even if you can treat "every man" along the same lines as a name like John, how could you even begin to account for the scope ambiguity? Frege's trick was to separate out the introduction of the term from the point at which that term becomes an argument:
∀x. man(x) → ∃y. woman(y) & love(x,y)
However, it is not transparently obvious to untutored eyes which occurrences of the variables x and y serve to introduce and scope [man(x), woman(y)] and which serve to indicate argument structure [love(x,y)]. Both are woven into the same notation. The tripartite structures proposed by Barbara Partee make the distinction a little clearer, but at the cost of more complex and expressive notations.
The layered graphical representation is an attempt at a much cleaner separation of these concerns. Predicate-argument structure is represented as unadulterated predicate-argument structure, without recourse to variables. Scope and context are represented by a mapping onto a separate structure. Thus, amongst other things, it becomes trivially easy to extract just the predicate-argument structure from the representation. But there is still something of a promissory note here: to really make good on this claim, a way of representing quantifier scope by likening it to contextual scope is required. In practical terms this is perhaps not of the utmost priority, but theoretically it is.
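As a purely illustrative sketch (in Python, and not the system's actual data structures), a layered representation of "every man loves a woman" might keep the two layers apart like this; the node identifiers, role labels and context names are invented for the example:

# Predicate-argument layer: labelled nodes and role edges, no variables.
nodes = {"n1": "love", "n2": "man", "n3": "woman"}
edges = [("n1", "arg1", "n2"), ("n1", "arg2", "n3")]

# Scope/context layer: a separate mapping placing each node in a context.
# "top" and "ctx_every" are illustrative context names only.
contexts = {"n1": "top", "n2": "ctx_every", "n3": "top"}

def predicate_argument_structure():
    # Extracting just the predicate-argument structure is trivial:
    # ignore the context mapping and read the edges off the graph.
    return [(nodes[h], role, nodes[t]) for (h, role, t) in edges]

print(predicate_argument_structure())
# [('love', 'arg1', 'man'), ('love', 'arg2', 'woman')]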
Modality, Intensionality and Quantifiers
Most computer scientists coming to modal logic nowadays see it as a special case of first order logic (quantify over possible worlds). I am inclined to explore the opposite view, and see first order logic as a special case of modal logic, where scope-inducing quantifiers are both rare and act like modal operators. Here is an initial sketch of how this might work.
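For reference, the familiar direction is the standard translation, which reads the modal operators as quantification over accessible worlds:

ST_w(p) = p(w)
ST_w(□φ) = ∀v. R(w,v) → ST_v(φ)
ST_w(◇φ) = ∃v. R(w,v) & ST_v(φ)

The idea here is to run this the other way round: a scope-inducing quantifier opens a context, much as □ ranges over accessible worlds, rather than binding a variable.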
First, noun phrases refer to collections of objects. (Well, actually, following the Preventing Existence and Counting Contexts papers, noun phrases refer to concepts that have cardinality restrictions.) For example, "three men" refers to a group object of men, with cardinality three; "every man" refers to a group object of men that includes all men in the domain of discourse; and "three boys ate five pizzas" means, under its most natural reading, that there were three boys and there were five pizzas, and the pizzas got eaten by the boys. Under certain circumstances, though, the terms "three boys" or "five pizzas" can be interpreted distributively; e.g., there was a collection of three boys, and each member of the collection ate five pizzas.
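Counting pizzas makes the difference between the readings vivid; the glosses below are informal, not the system's notation:

collective reading: one group of three boys, one group of five pizzas, a single group eating — five pizzas in total
    |boys| = 3, |pizzas| = 5, eat(boys, pizzas)
distributive reading (over the boys): each of the three boys has his own five pizzas — up to 3 × 5 = 15 pizzas in total
    |boys| = 3, for each boy b: ∃P. pizzas(P) & |P| = 5 & eat(b, P)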
Distributive quantification is not yet implemented in the current system. But here are some hand-constructed examples of how this might work for "Every man loves a woman". Distribution over "every man" introduces a new context, D(man), and the "a woman" can scope wide or narrow with respect to the distributive context.
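In first order terms (rather than the graph notation itself), the two options are the familiar pair of scopings:

narrow scope — "a woman" introduced inside the distributive context D(man), so potentially a different woman per man:
    ∀x. man(x) → ∃y. woman(y) & love(x,y)
wide scope — "a woman" introduced outside D(man), so one woman loved by every man:
    ∃y. woman(y) & ∀x. man(x) → love(x,y)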
--------------
Now you may be excited or apprehensive about this double role played by variables in first-order logic. If you grew up with Martin-Löf's dependent quantifiers, you may find it hard to think of the meaning where all the men in the domain love the same woman: the dependent quantifier "for each man, there's a woman" might seem far more reasonable. But of course you can still express both.
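Written with dependent quantifiers (reading man and woman informally as types), the two readings come out as:

    for each man, there's a woman:   Πx:man. Σy:woman. love(x,y)
    the same woman for all the men:  Σy:woman. Πx:man. love(x,y)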
You can also think about the vast grey area between propositional logic and first-order logic, where you're allowed variables, but no quantifiers; where you can have dependencies between these variables, or not. Or the situations where you have variables and they are only existentially quantified, or universally so. Where Skolemization makes sense or not. So many possibilities that only come to you when teaching "baby logic".
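Skolemization brings the same contrast out: an existential that depends on a universal becomes a function, one that does not becomes a constant.

    ∀x. ∃y. love(x,y)   Skolemizes to   ∀x. love(x, f(x))   (the woman depends on the man)
    ∃y. ∀x. love(x,y)   Skolemizes to   ∀x. love(x, c)      (one woman for all)

And a quantifier-free formula like love(x,y), with its variables left free, is standardly read as universally closed.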