Tuesday, April 7, 2026

A Recipe for Models, Told Twice

One of the pleasures of long mathematical collaborations is watching how the same ideas get told differently by different people. Andrea Schalk and I worked together on models of linear logic for some years, and the pair of papers I want to discuss today — one a conference extended abstract from AMAST 1999, the other its journal incarnation in Theoretical Computer Science in 2004 — illustrate this rather vividly. The mathematics is essentially the same. The sensibility is not.

The idea itself

The question we were trying to answer was: can you build a unified framework that explains why so many different models of linear logic — coherence spaces, hypercoherences, phase spaces, the category of relations — all work in structurally similar ways? And can you do it cleanly enough that you understand why they sometimes collapse (tensor equalling par, products equalling coproducts) and sometimes don't?


The answer turned out to be pleasingly simple. Start with Rel, the category of sets and relations, which is compact closed and serves as the ambient universe. Choose two ingredients: an endofunctor F on Rel (playing the role of "how you read the structure of an object") and a poset L (or P in the journal version) that is symmetric monoidal closed. 

Then consider pairs (A, α) where A is a set and α : FA → L is a function — a "poset-valued set," or an L_F-set. Morphisms are relations R : A ⇸ B such that whenever x (FR) y, we have α(x) ≤ β(y). That's it. The poset L controls the categorical structure of the whole thing.
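The definition is concrete enough to sketch executably. Here is a minimal version for finite sets, with a relation R : A ⇸ B represented as a set of pairs; the function and variable names are mine, not the papers', and the trivial instance (F the identity, L the two-element poset) is just a smoke test of the morphism condition.

```python
# A sketch of the poset-valued-sets morphism condition, for finite sets.
# A relation R : A -/-> B is a set of pairs; alpha and beta are dicts
# representing the valuations alpha : FA -> L and beta : FB -> L.

def is_morphism(R, alpha, beta, F_rel, leq):
    """R is a morphism of poset-valued sets iff
    x (FR) y implies alpha(x) <= beta(y) in the poset L."""
    return all(leq(alpha[x], beta[y]) for (x, y) in F_rel(R))

# Trivial instance: F = identity functor, L = the two-element poset 0 <= 1.
leq2 = lambda p, q: p <= q
identity_F = lambda R: R

alpha = {"a1": 0, "a2": 1}           # valuation on A = {a1, a2}
beta = {"b1": 1}                     # valuation on B = {b1}
R = {("a1", "b1"), ("a2", "b1")}     # relates every element of A to b1

print(is_morphism(R, alpha, beta, identity_F, leq2))  # True: 0 <= 1 and 1 <= 1
```

Changing beta to send b1 to 0 breaks the condition at the pair (a2, b1), since 1 ≤ 0 fails — the poset really is doing the policing.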

  1. Coherence spaces? Take L = 3, the three-element poset {⌢ < 1 < ⌣} and F = the diagonal functor.
  2. Hypercoherences? Take L = 3 and F = the finite powerset functor.
  3. Phase spaces? F constant, L a complete lattice of "facts."
  4. The category Rel itself? Let L be a singleton. 
The tensor, the linear function space, the negation, and the exponential modality all arise by lifting the corresponding structure from L through F into the new category. The "collapse" phenomena (why tensor equals par in Rel, for instance) are explained: they happen exactly when the corresponding operations already coincide in L.
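The coherence-space instance above can also be made concrete. In this sketch L = 3 is encoded as incoherent (0) < equal (1) < coherent (2), and F is the diagonal functor, which sends a relation R to the relation pairing R with itself; the encoding choices are mine, and the papers' notation differs.

```python
# Coherence spaces as poset-valued sets: L = 3, F = the diagonal functor.
INC, EQ, COH = 0, 1, 2

def diag(R):
    """F(R) relates (a, a') to (b, b') iff a R b and a' R b'."""
    return {((a, a2), (b, b2)) for (a, b) in R for (a2, b2) in R}

def coh_valuation(web, coherent_pairs):
    """alpha : A x A -> 3, read off a coherence relation on the web."""
    return {(a, a2): (EQ if a == a2
                      else COH if frozenset((a, a2)) in coherent_pairs
                      else INC)
            for a in web for a2 in web}

def is_morphism(R, alpha, beta):
    return all(alpha[x] <= beta[y] for (x, y) in diag(R))

# Two tiny coherence spaces, each with one coherent pair of tokens.
A = {"a1", "a2"}; alpha = coh_valuation(A, {frozenset(("a1", "a2"))})
B = {"b1", "b2"}; beta = coh_valuation(B, {frozenset(("b1", "b2"))})

# a1 -> b1, a2 -> b2 preserves coherence, so it is a morphism.
print(is_morphism({("a1", "b1"), ("a2", "b2")}, alpha, beta))  # True
# Collapsing both tokens onto b1 sends a coherent pair (2) to an
# equal pair (1), and 2 <= 1 fails.
print(is_morphism({("a1", "b1"), ("a2", "b1")}, alpha, beta))  # False
```

The failing second relation is exactly the usual coherence-space constraint surfacing through the order on 3.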

The 1999 paper: my voice
The AMAST extended abstract has a particular flavour. It opens with the motivation from comparing the Chu construction with Dialectica categories — a very personal starting point for me, since the Dialectica categories were my PhD thesis work, and I had long been curious about how far you could get without built-in duality. The paper leans into the conceptual story: why are we doing this, what are we hoping to learn, what surprised us. There's a certain directness to it, a willingness to foreground the puzzle.

There's also a slightly unconventional technical choice that reflects this: the objects are called L_F-sets (with the poset coming first), signaling that the algebraic structure of L is the conceptual driver. The notation is a little idiosyncratic, even playful. The paper says "we are independently pursuing the work on these two classes of models" — meaning Dialectica categories and games models sit outside the framework, a gap openly acknowledged rather than quietly elided.

The 2004 paper: Andrea's voice
The TCS journal version is altogether more systematic. The objects are now called P_F-sets, a small notational shift but a telling one — P for poset, cleaner and more standard. The introduction is careful and measured. It lists exactly what the construction controls (classical versus intuitionistic, coincidence of products and coproducts, coincidence of tensor and par) as a precise taxonomy before saying anything else. The proofs are spelled out fully, including the ones the extended abstract wisely deferred to an appendix or left implicit. 

Andrea's exposition has a different quality of care — the kind that comes from wanting every step to be airtight and every claim to be precisely scoped. Where my version is somewhat conversational, hers is architectural. The section on generalizing away from Rel is notably honest: "our proof of symmetric monoidal closure does make use of the fact that the base category is compact closed, and we currently cannot see an obvious way of generalizing this."

Both versions thank Martin Hyland, which seems right.

What this illustrates

 Mathematical papers are not just records of results. They are also, inevitably, records of perspective — of what the author finds surprising, what they think needs explaining, where they think the edges of the work lie. The same theorem can be framed as the answer to a puzzle you've been carrying for years, or as a modular construction tool for future use. Neither framing is wrong. They illuminate different things.

Reading these two papers side by side, I'm struck by how much the conference-paper form suited the first one: the space constraint forced a clarity of motivation that sometimes gets diluted in full journal treatment. And I'm equally struck by how much Andrea's careful expansion added — the fill-in lemma becomes genuinely transparent in the longer version in a way it wasn't before.

Further directions

Both papers are honest about the limits of the construction. The two classes of models that don't fit — dialectica categories and games models — are flagged in both versions, and the 2004 paper additionally notes the difficulty of generalising away from Rel as the base category: the proof of symmetric monoidal closure leans on Rel being compact closed in an essential way, and it's not obvious how to relax that. This dependence always bothered me — compact closure of Rel means that tensor, par, and linear implication all coincide there, and that linear true equals linear false, which is precisely the kind of collapse we are trying to avoid. 

One direction that might be worth exploring is replacing the poset L with a quantale — a complete lattice with a compatible monoid structure — which would merge the two separate assumptions on L into a single algebraic object. But the completeness of a quantale always felt artificial here: we are modelling propositional linear logic, and that much lattice-theoretic machinery seems like more than the situation calls for.

A brief note on connections

 The dialectica categories that motivated this work in the first place continue to sit just outside the framework — as the conclusions of both papers honestly note. That tension remains interesting to me: the P_F-set construction is "one-sided" in the sense that duality comes from L rather than from a built-in pairing, while Dialectica categories are irreducibly "two-sided." Whether there's a common generalisation that comfortably houses both is a question I haven't stopped thinking about.


The papers can be found as "Poset-valued sets or how to build models for linear logics" and "Building Models of Linear Logic".


Wednesday, April 1, 2026

The logic of using things exactly once — and why it matters

 What a 1999 gathering of logicians was quietly getting right about the future of computing

August 1999 · Dagstuhl Castle, Germany.

 In August 1999, about two dozen researchers gathered at Schloss Dagstuhl — a castle in the German countryside that has hosted some of the most productive meetings in computer science — for a seminar on Linear Logic and its Applications. The meeting was organized by Dick Crouch, Eike Ritter, Josef van Genabith, and me. Looking back, it is striking how many of the ideas debated there have since moved from "interesting theoretical curiosity" to genuine practical importance. 

 Linear Logic was introduced by Jean-Yves Girard in 1987. Its central idea is deceptively simple: instead of treating logical assumptions as facts that can be used freely and repeatedly, treat them as resources — things that get consumed when used. Want to use something twice? You have to say so explicitly. This turns out to be a surprisingly powerful lens for thinking about computation. 

"Linear Logic is a refinement of Classical Logic: using Girard's well-known translation we can investigate the usage of resources in proofs of traditional theorems."

One of the liveliest threads at the seminar concerned natural language. Several talks explored how the resource-sensitivity of Linear Logic maps beautifully onto the combinatorics of sentence meaning — the way words and phrases "combine" their meanings exactly once to produce the meaning of a whole sentence. This is the foundation of the glue semantics approach to natural language, which uses an implicational fragment of Linear Logic to assemble meanings compositionally. The core proof-search challenge — finding all distinct derivations efficiently, since each derivation corresponds to a distinct reading of an ambiguous sentence — anticipated concerns that are very much alive in today's work on parsing and semantic composition.
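The "combine meanings exactly once" idea can be illustrated with a toy derivation search. This is my own simplified sketch, not the actual glue-semantics machinery: meanings are strings, glue formulas live in the implicational fragment (atoms and ('-o', a, b)), and the only rule is linear modus ponens, which consumes both of its premises.

```python
# Toy glue-style assembly: find all meanings of the goal type that
# consume every premise exactly once, using only linear modus ponens.

def derivations(premises, goal):
    """Enumerate the distinct derived meanings of type `goal`."""
    results = set()

    def search(items):
        if len(items) == 1 and items[0][1] == goal:
            results.add(items[0][0])
            return
        for i, (m1, t1) in enumerate(items):
            for j, (m2, t2) in enumerate(items):
                # Apply m1 : t2 -o t to m2 : t2, consuming both premises.
                if i != j and isinstance(t1, tuple) and t1[0] == '-o' and t1[1] == t2:
                    rest = [it for k, it in enumerate(items) if k not in (i, j)]
                    search(rest + [(f"{m1}({m2})", t1[2])])

    search(list(premises))
    return results

# "John loves Mary": loves : e_j -o (e_m -o t), john : e_j, mary : e_m
premises = [
    ("loves", ('-o', 'e_j', ('-o', 'e_m', 't'))),
    ("john", 'e_j'),
    ("mary", 'e_m'),
]
print(derivations(premises, 't'))  # {'loves(john)(mary)'}
```

With premises for an ambiguous sentence (say, two quantifiers whose types allow either application order), the same search returns several distinct meanings — one per derivation, which is exactly the readings-as-proofs correspondence the paragraph describes.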

Another major theme was functional programming and memory management. The intuition is elegant: if a variable is used exactly once (a "linear" variable), then the object it refers to has exactly one pointer to it at any given time, which means it can safely be updated in place without copying. Several talks at the seminar explored how far this idea could be pushed — and where it runs into difficulties, particularly with higher-order functions. This line of thinking is now central to the design of modern systems programming languages. Rust's celebrated ownership and borrowing system is a direct descendant of these ideas, and linear types have since made their way into Haskell as well, a sign that the functional programming community has come to take resource-awareness seriously. 
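The "used exactly once" intuition is easy to demonstrate with a toy usage counter. Here expressions are nested tuples — ('var', name), ('app', f, x), ('pair', a, b) — and the representation and names are mine; real linear type systems do far more work, precisely around the higher-order cases mentioned above.

```python
# A toy "used exactly once" check: a variable with one occurrence has a
# unique pointer to its value, so updating it in place is safe.
from collections import Counter

def uses(expr, counts=None):
    """Count how many times each variable occurs in an expression."""
    counts = Counter() if counts is None else counts
    if expr[0] == 'var':
        counts[expr[1]] += 1
    else:
        for sub in expr[1:]:
            uses(sub, counts)
    return counts

def safe_to_update_in_place(expr, name):
    return uses(expr)[name] == 1

e1 = ('app', ('var', 'reverse'), ('var', 'xs'))
print(safe_to_update_in_place(e1, 'xs'))   # True: xs occurs exactly once
e2 = ('pair', ('var', 'xs'), ('var', 'xs'))
print(safe_to_update_in_place(e2, 'xs'))   # False: xs is shared
```

In the second expression the structure bound to xs is reachable twice, so an in-place reversal would corrupt one of the two uses — which is the copying that linearity lets a compiler avoid.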

The seminar also featured what became perhaps its most contentious debate: whether Linear Logic really is "the" logic of resources, or whether a rival system — the Logic of Bunched Implications, BI, proposed by Peter O'Hearn and David Pym — did a better job. The BI proponents argued that Linear Logic captured a process reading of resources (independent parallel computations) but not a fully satisfactory spatial reading (separate regions of memory). This dispute, which generated "the most controversy in the meeting," turned out to matter enormously: BI became the foundation of Separation Logic, which is now one of the most important frameworks for formally verifying the correctness of programs that manipulate memory. O'Hearn went on to share the 2016 Gödel Prize with Stephen Brookes for concurrent separation logic, which grew out of this line of work.
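The spatial reading at the heart of that debate is BI's separating conjunction: P * Q holds of a heap that splits into two disjoint parts, one satisfying P and one satisfying Q. A small sketch, with heaps as dicts from addresses to values (the encoding is mine, for illustration only):

```python
# BI's separating conjunction P * Q as a predicate on finite heaps:
# some split of the heap into disjoint halves satisfies P and Q.
from itertools import combinations

def sep_conj(p, q):
    def holds(heap):
        addrs = list(heap)
        for r in range(len(addrs) + 1):
            for left in combinations(addrs, r):
                h1 = {a: heap[a] for a in left}
                h2 = {a: heap[a] for a in heap if a not in h1}
                if p(h1) and q(h2):
                    return True
        return False
    return holds

# The "points-to" assertion: the heap is exactly one cell addr |-> val.
points_to = lambda addr, val: (lambda h: h == {addr: val})

# 10 |-> 1 * 20 |-> 2 holds of the two-cell heap...
print(sep_conj(points_to(10, 1), points_to(20, 2))({10: 1, 20: 2}))  # True
# ...but 10 |-> 1 * 10 |-> 1 never holds: one cell cannot occupy
# two disjoint regions. This is the "separateness" Separation Logic exploits.
print(sep_conj(points_to(10, 1), points_to(10, 1))({10: 1}))         # False
```

That second failure is the whole point: aliasing is ruled out by the connective itself, which is what makes reasoning about heap-manipulating programs tractable.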

Not every thread has yet reached its destination, and that is part of what makes this area exciting. Applications of Linear Logic to logic programming, planning, and concurrency have had a long and productive history, and there is no reason to think the story is finished — these are hard problems, and resource-aware reasoning keeps finding new angles on them. What makes a seminar like this valuable is precisely that ideas from different communities — proof theorists, linguists, programming language researchers, verification specialists — get to collide and cross-pollinate, with payoffs that can take years or decades to fully materialise.

One striking absence from the 1999 programme was Differential Linear Logic, introduced by Ehrhard and Regnier a few years later. By adding a notion of differentiation — in the calculus sense — to the logical framework, it opened a connection between proof theory and the mathematics of smooth functions and power series that nobody had anticipated. It gave a logical account of the "Taylor expansion" of proofs, and has since made contact with probabilistic programming, neural network semantics, and the theory of quantitative information flow. A quarter century on, Differential Linear Logic is still actively developing: the right categorical semantics, the proper treatment of higher types, and the connections to adjacent fields are not yet fully settled. It is a reminder that a logical idea, once it takes hold, can keep ramifying in unexpected directions long after the original seminar has ended.  

Twenty-five years later, it is fair to say that the core intuition of the 1999 meeting has aged very well. Thinking carefully about how resources are used — in memory, in concurrency, in meaning assembly, in the very structure of proofs — remains one of the most fertile ideas at the intersection of logic and computer science. Some of the seeds planted at Dagstuhl are still growing.

The original seminar report is available from the Dagstuhl website.