Tuesday, April 7, 2026

A Recipe for Models, Told Twice

Dr Andrea Schalk, School of Computer Science, The University of Manchester

One of the pleasures of long mathematical collaborations is watching how the same ideas get told differently by different people. Andrea Schalk and I worked together on models of linear logic for some years, and the pair of papers I want to discuss today — one a conference extended abstract from AMAST 1999, the other its journal incarnation in Theoretical Computer Science in 2004 — illustrate this rather vividly. The mathematics is essentially the same. The sensibility is not.

The idea itself

The question we were trying to answer was: can you build a unified framework that explains why so many different models of linear logic — coherence spaces, hypercoherences, phase spaces, the category of relations — all work in structurally similar ways? And can you do it cleanly enough that you understand why they sometimes collapse (tensor equalling par, products equalling coproducts) and sometimes don't?


The answer turned out to be pleasingly simple. Start with Rel, the category of sets and relations, which is compact closed and serves as the ambient universe. Choose two ingredients: an endofunctor F on Rel (playing the role of "how you read the structure of an object") and a poset L (or P in the journal version) that is symmetric monoidal closed. 

Then consider pairs (A, α) where A is a set and α : FA → L is a function — a "poset-valued set," or an L_F-set. A morphism from (A, α) to (B, β) is a relation R : A ⇸ B such that whenever x (FR) y, we have α(x) ≤ β(y). That's it. The poset L controls the categorical structure of the whole thing.

  1. Coherence spaces? Take L = 3, the three-element poset {⌢ < 1 < ⌣} and F = the diagonal functor.
  2. Hypercoherences? Take L = 3 and F = the finite powerset functor.
  3. Phase spaces? F constant, L a complete lattice of "facts."
  4. The category Rel itself? Let L be a singleton. 
The tensor, the linear function space, the negation, the exponential modality, all arise by lifting the corresponding structure from L through F into the new category. The "collapse" (why tensor = par in coherence spaces, for instance) is explained: it happens because those operations coincide in 3.
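The morphism condition is easy to play with on finite examples. Here is a toy sketch in Python, taking F to be the identity functor for simplicity and L = 3 as in the coherence-space example; all the names (RANK, leq, is_morphism) are illustrative, not from the papers.

```python
# Toy finite check of the morphism condition for poset-valued sets,
# with F = identity and L = 3, the three-element poset frown < one < smile.
RANK = {"frown": 0, "one": 1, "smile": 2}

def leq(x, y):
    """The order relation of the poset 3."""
    return RANK[x] <= RANK[y]

def is_morphism(R, alpha, beta):
    """A relation R between A and B is a morphism (A, alpha) -> (B, beta)
    exactly when alpha(a) <= beta(b) for every related pair (a, b)."""
    return all(leq(alpha[a], beta[b]) for (a, b) in R)

# Two tiny poset-valued sets and two candidate relations between them.
alpha = {0: "frown", 1: "one"}        # valuation on A = {0, 1}
beta = {"x": "frown", "y": "smile"}   # valuation on B = {"x", "y"}

R_ok = {(0, "x"), (1, "y")}   # order respected everywhere: a morphism
R_bad = {(1, "x")}            # one <= frown fails: not a morphism
```

The point of the toy: changing β at a single element is enough to destroy morphism-hood, which is exactly how the poset L "controls" the category.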

The 1999 paper: my voice
The AMAST extended abstract has a particular flavour. It opens with the motivation from comparing Chu's construction with Dialectica categories — a very personal starting point for me, since the Dialectica categories were my PhD thesis work, and I had long been curious about how far you could get without built-in duality. The paper leans into the conceptual story: why are we doing this, what are we hoping to learn, what surprised us. There's a certain directness to it, a willingness to foreground the puzzle.

There's also a slightly unconventional technical choice that reflects this: the objects are called L_F-sets (with the poset coming first), signaling that the algebraic structure of L is the conceptual driver. The notation is a little idiosyncratic, even playful. The paper says "we are independently pursuing the work on these two classes of models" — meaning dialectica and games sit outside the framework, a gap openly acknowledged rather than quietly elided.

The 2004 paper: Andrea's voice
The TCS journal version is altogether more systematic. The objects are now called P_F-sets, a small notational shift but a telling one — P for poset, cleaner and more standard. The introduction is careful and measured. It lists exactly what the construction controls (classical versus intuitionistic, coincidence of products and coproducts, coincidence of tensor and par) as a precise taxonomy before saying anything else. The proofs are spelled out fully, including the ones the extended abstract wisely deferred to an appendix or left implicit. 

Andrea's exposition has a different quality of care — the kind that comes from wanting every step to be airtight and every claim to be precisely scoped. Where my version is somewhat conversational, hers is architectural. The section on generalizing away from Rel is notably honest: "our proof of symmetric monoidal closure does make use of the fact that the base category is compact closed, and we currently cannot see an obvious way of generalizing this."

Both versions thank Martin Hyland, which seems right.

What this illustrates

 Mathematical papers are not just records of results. They are also, inevitably, records of perspective — of what the author finds surprising, what they think needs explaining, where they think the edges of the work lie. The same theorem can be framed as the answer to a puzzle you've been carrying for years, or as a modular construction tool for future use. Neither framing is wrong. They illuminate different things.

Reading these two papers side by side, I'm struck by how much the conference-paper form suited the first one: the space constraint forced a clarity of motivation that sometimes gets diluted in full journal treatment. And I'm equally struck by how much Andrea's careful expansion added — the fill-in lemma becomes genuinely transparent in the longer version in a way it wasn't before.

Further directions

Both papers are honest about the limits of the construction. The two classes of models that don't fit — dialectica categories and games models — are flagged in both versions, and the 2004 paper additionally notes the difficulty of generalising away from Rel as the base category: the proof of symmetric monoidal closure leans on Rel being compact closed in an essential way, and it's not obvious how to relax that. This dependence always bothered me — compact closure of Rel means that tensor, par, and linear implication all coincide there, and that linear true equals linear false, which is precisely the kind of collapse we are trying to avoid. 

One direction that might seem worth exploring is replacing the poset L with a quantale — a complete lattice with a compatible monoid structure — which would merge the two separate assumptions on L into a single algebraic object. But the completeness of a quantale always felt artificial here: we are modelling propositional linear logic, and that much lattice-theoretic machinery seems like more than the situation calls for.

A brief note on connections

 The dialectica categories that motivated this work in the first place continue to sit just outside the framework — as the conclusions of both papers honestly note. That tension remains interesting to me: the P_F-set construction is "one-sided" in the sense that duality comes from L rather than from a built-in pairing, while Dialectica categories are irreducibly "two-sided." Whether there's a common generalisation that comfortably houses both is a question I haven't stopped thinking about.


The papers can be found as "Poset-valued sets or how to build models for linear logics" and "Building Models of Linear Logic".

Wednesday, April 1, 2026

The logic of using things exactly once — and why it matters

 What a 1999 gathering of logicians was quietly getting right about the future of computing

August 1999 · Dagstuhl Castle, Germany.

 In August 1999, about two dozen researchers gathered at Schloss Dagstuhl — a castle in the German countryside that has hosted some of the most productive meetings in computer science — for a seminar on Linear Logic and its Applications. The meeting was organized by Dick Crouch, Eike Ritter, Josef van Genabith, and me. Looking back, it is striking how many of the ideas debated there have since moved from "interesting theoretical curiosity" to genuine practical importance. 

 Linear Logic was introduced by Jean-Yves Girard in 1987. Its central idea is deceptively simple: instead of treating logical assumptions as facts that can be used freely and repeatedly, treat them as resources — things that get consumed when used. Want to use something twice? You have to say so explicitly. This turns out to be a surprisingly powerful lens for thinking about computation. 

"Linear Logic is a refinement of Classical Logic: using Girard's well-known translation we can investigate the usage of resources in proofs of traditional theorems."

One of the liveliest threads at the seminar concerned natural language. Several talks explored how the resource-sensitivity of Linear Logic maps beautifully onto the combinatorics of sentence meaning — the way words and phrases "combine" their meanings exactly once to produce the meaning of a whole sentence. This is the foundation of the glue semantics approach to natural language, which uses an implicational fragment of Linear Logic to assemble meanings compositionally. The core proof-search challenge — finding all distinct derivations efficiently, since each derivation corresponds to a distinct reading of an ambiguous sentence — anticipated concerns that are very much alive in today's work on parsing and semantic composition.

Another major theme was functional programming and memory management. The intuition is elegant: if a variable is used exactly once (a "linear" variable), then the object it refers to has exactly one pointer to it at any given time, which means it can safely be updated in place without copying. Several talks at the seminar explored how far this idea could be pushed — and where it runs into difficulties, particularly with higher-order functions. This line of thinking is now central to the design of modern systems programming languages. Rust's celebrated ownership and borrowing system is a direct descendant of these ideas, and linear types have since made their way into Haskell as well, a sign that the functional programming community has come to take resource-awareness seriously. 
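The in-place-update intuition can be illustrated with a small dynamic check. This is only a run-time sketch of a discipline that linear type systems (and Rust's borrow checker) enforce statically at compile time; the class and its names are mine, purely for illustration.

```python
class Linear:
    """Toy dynamic check of the 'use exactly once' discipline.

    Real linear type systems reject a second use at compile time;
    this wrapper merely fails at run time, to illustrate the idea.
    """

    def __init__(self, value):
        self._value = value
        self._used = False

    def consume(self):
        """Hand out the single reference to the wrapped value.

        After this call the wrapper is spent, so the caller holds the
        only pointer and may safely update the object in place.
        """
        if self._used:
            raise RuntimeError("linear value used more than once")
        self._used = True
        value, self._value = self._value, None
        return value
```

With only one live pointer, consuming the value and then mutating it in place is safe; a second `consume` is precisely the error a linear type checker would rule out before the program ever ran.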

The seminar also featured what became perhaps its most contentious debate: whether Linear Logic really is "the" logic of resources, or whether a rival system — the Logic of Bunched Implications, BI, proposed by Peter O'Hearn and David Pym — did a better job. The BI proponents argued that Linear Logic captured a process reading of resources (independent parallel computations) but not a fully satisfactory spatial reading (separate regions of memory). This dispute, which generated "the most controversy in the meeting," turned out to matter enormously: BI became the foundation of Separation Logic, which is now one of the most important frameworks for formally verifying the correctness of programs that manipulate memory. O'Hearn received the ACM Turing Award in 2023, in part for this work. 

Not every thread has yet reached its destination, and that is part of what makes this area exciting. Applications of Linear Logic to logic programming, planning, and concurrency have had a long and productive history, and there is no reason to think the story is finished — these are hard problems, and resource-aware reasoning keeps finding new angles on them. What makes a seminar like this valuable is precisely that ideas from different communities — proof theorists, linguists, programming language researchers, verification specialists — get to collide and cross-pollinate, with payoffs that can take years or decades to fully materialise.

One striking absence from the 1999 programme was Differential Linear Logic, introduced by Ehrhard and Regnier a few years later. By adding a notion of differentiation — in the calculus sense — to the logical framework, it opened a connection between proof theory and the mathematics of smooth functions and power series that nobody had anticipated. It gave a logical account of the "Taylor expansion" of proofs, and has since made contact with probabilistic programming, neural network semantics, and the theory of quantitative information flow. A quarter century on, Differential Linear Logic is still actively developing: the right categorical semantics, the proper treatment of higher types, and the connections to adjacent fields are not yet fully settled. It is a reminder that a logical idea, once it takes hold, can keep ramifying in unexpected directions long after the original seminar has ended.  

Twenty-five years later, it is fair to say that the core intuition of the 1999 meeting has aged very well. Thinking carefully about how resources are used — in memory, in concurrency, in meaning assembly, in the very structure of proofs — remains one of the most fertile ideas at the intersection of logic and computer science. Some of the seeds planted at Dagstuhl are still growing.

The original seminar report is available from Dagstuhl.

 

Friday, March 27, 2026

What If Cantor Was Wrong?

A blog post about infinity, categories, and a mathematical mystery.

Cantor showed that not all infinities are equal: the infinity of the natural numbers (ℵ₀) is genuinely smaller than the infinity of the real numbers (called c, the continuum). He then asked what seems like a natural follow-up: is there anything in between? His conjecture that the answer is no became known as the Continuum Hypothesis (CH).

He spent years trying to prove it. He failed. He had a breakdown. He died without knowing the answer. As it turned out, there was a good reason: Gödel (1940) and Cohen (1963) together showed that CH is neither provable nor disprovable from the standard axioms of mathematics. It is, in a precise sense, independent of everything we build mathematics on.

This is, philosophically speaking, quite alarming. But also liberating.

So Let's Assume It's False

If CH is false, there are cardinals strictly between ℵ₀ and c (all of them infinite, of course). Set theorists have identified dozens of these that arise naturally in analysis and topology — measuring things like how many functions you need to dominate all others, or how many small sets it takes to cover the real line. These are called cardinal invariants of the continuum, or small cardinals, and their relationships to each other form a rich landscape, summarised in the beautiful Cichoń's Diagram.

This is where my friend Samuel Gomes da Silva comes in. Samuel is a set theorist at the Federal University of Bahia in Brazil, thinking hard about these cardinals and how to prove inequalities between them. I am a category theorist — my home territory is Dialectica categories, which I introduced in my PhD thesis in the late 1980s as a categorical version of Gödel's Dialectica interpretation. They turned out, somewhat surprisingly, to be models of Linear Logic — but that was not the original plan. Even more surprisingly, these categories had found an exciting connection to set theory through the insight of Andreas Blass, but no one had fully explained why.

Samuel and I started talking, and realised we were looking at the same mountain from opposite sides. Our paper — Dialectica Categories, Cardinalities of the Continuum, and Combinatorics of Ideals — is the attempt to build a path between those two viewpoints.

The Mystery We Were Trying to Solve

Here is the core puzzle, articulated beautifully by Blass himself:

"It is an empirical fact that proofs of inequalities between cardinal characteristics of the continuum usually proceed by representing the characteristics as norms of objects in PV and then exhibiting explicit morphisms between those objects."

Translation for non-specialists: there is a category (a mathematical structure of objects and arrows between them) — closely related to my Dialectica categories — such that almost every known proof of an inequality between small cardinals can be encoded as a morphism in that category. This is not a theorem anyone had proved. It was just... observed. An empirical fact. A pattern that kept showing up.

Blass pointed it out. Everyone nodded. Everyone just kept using the pattern without asking why it worked. Samuel and I found that strange: we felt that empirical facts deserve explanations.

The Key Idea (Gently)

The category in question — called PV, after de Paiva and Vojtáš — has objects that are triples: a set of "problems," a set of "solutions," and a relation saying which solutions solve which problems. A morphism between two such objects is a way of translating problems and solutions between them, kind of preserving the solving relationship.

The norm of such an object — a cardinal number naturally associated to it — turns out to be the minimum number of solutions needed to cover all problems. These are "Abelard and Eloise cardinals," as we fondly called them in the paper, because their definitions always have the shape "for all... there exists..." — the classic back-and-forth of a logical argument, or a medieval philosophical debate.

The beautiful discovery is that most of the small cardinals that set theorists care about — b, d, add, cov, non, cof — can all be expressed as norms of natural objects in PV. And when one norm is smaller than another, there is (almost always) a morphism that witnesses this, in a constructive and categorical way.

In our paper, we showed this works beautifully for pre-orders and for ideals (a notion of "smallness" for collections of sets). For any reasonable notion of "bounded" in a pre-order, the associated cardinal invariants — how many bounded sets you need to cover everything, how large an unbounded family must be — all fall out as norms of Dialectica objects. The morphisms between them encode exactly the "witness-choosing" arguments that set theorists had been writing out by hand for decades.
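The norm is easy to compute by brute force for a toy finite PV-style object, here simplified to just (problems, solutions, solves); the function name and the divisibility example are mine, purely for illustration.

```python
# Brute-force "norm" of a finite PV-style object: the least number of
# solutions that jointly solve every problem.
from itertools import combinations

def norm(problems, solutions, solves):
    """Least k such that some k-element family of solutions covers all
    problems; None if even all the solutions together do not suffice."""
    for k in range(len(solutions) + 1):
        for chosen in combinations(solutions, k):
            if all(any(solves(s, p) for s in chosen) for p in problems):
                return k
    return None

# Example: a solution s "solves" a problem p when s divides p.
# For problems {2,3,4,5,6} and solutions {2,3,5}, no pair of solutions
# covers everything, so the norm is 3.
problems = [2, 3, 4, 5, 6]
solutions = [2, 3, 5]
divides = lambda s, p: p % s == 0
```

The infinite cardinals of the paper are, of course, not computable like this; the toy only shows the "for all problems there exists a covering solution" shape that makes these Abelard and Eloise cardinals.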

What We Conjectured 

We proposed four conjectural principles in the paper. The most important, roughly speaking, are:
   1. Most natural cardinal invariants in set theory are "Abelard and Eloise" cardinals — they can be expressed as norms of objects in Dialectica.
   2. Pre-orders are the archetypal case — understanding b(P) and d(P) for a pre-order P gives you the right intuition for all the others.
   3. Any reasonable proof of an inequality between such cardinals, even one written without thinking about categories, secretly encodes a morphism — the morphism is hiding in the "witness-choosing" steps.
   4. The deepest explanation will ultimately be categorical — there is something about the structure of Dialectica categories that makes them the natural home for this kind of mathematics.

We were honest in the paper: we hadn't fully answered our Main Question. We still haven't. But we had identified the right question to ask, which is, in mathematics, sometimes most of the work.

A Personal Note

I was not supposed to care about cardinal invariants. I was a categorical logician, interested in the internal structure of proofs, in models of computation, in the geometry of inference. The idea that the categories I built to model a fragment of logic would end up being the natural language for a corner of set theory about the size of infinity — that was not in my plan.

But that is what mathematics does. It finds its own connections across whatever disciplinary boundaries we have invented. Samuel and I came from different mathematical cultures, spoke different technical dialects, and cared about different problems. And yet the mathematics insisted that we were talking about the same thing.

That, perhaps, is the real moral of the Continuum Hypothesis: even a question we cannot answer has rich and surprising structure. Even undecidability is fertile ground. We might not know if there's something between ℵ₀ and c. But if there is, we're starting to understand what it looks like.

---------------------------------------------------------------------------------------
The paper discussed in this post is: Samuel G. da Silva and Valeria C. V. de Paiva, "Dialectical categories, cardinalities of the continuum and combinatorics of ideals," Logic Journal of the IGPL, Vol. 25, No. 4, 2017. Yes, there is a typo in the title: it should be Dialectica, not Dialectical. That is the only thing the errata corrects.

Tuesday, March 17, 2026

Constructive Modal Logics: Again

Gavin Bierman

Over the last thirty years, I have repeatedly returned to a particular question in logic:

What should modal logic look like in a constructive world?

This question first appeared in my work in the early 1990s and has since shaped a series of papers, collaborations, workshops, and talks. Looking back, it is clear that these efforts form a small research lineage centered on the interaction between modality and constructive logic, especially intuitionistic logic.

I thought it might be useful to record a bit of that history.

Early motivations

In the early 1990s, modal logic was overwhelmingly studied in classical settings. But many areas that interested me—proof theory, type theory, and categorical semantics—are fundamentally constructive. This was a time when new connections between logic, computation, and category theory were being actively explored. Developments such as linear logic and the growing use of categorical semantics made it increasingly natural to ask how modal operators should behave in constructive settings.

Working constructively changes the landscape dramatically. Logical principles that are straightforward classically may behave very differently when one insists on intuitionistic reasoning. This becomes especially visible when modal operators are introduced. One cannot simply take a classical modal system and reinterpret it intuitionistically; the interaction with implication, proof structure, and semantics becomes much more delicate.

 

Understanding these interactions became the starting point of my work on intuitionistic modal logics. One of the first results of this effort was a paper (Intuitionistic Necessity Revisited) written in 1992 with Gavin Bierman and presented by him at the Logic at Work conference in Amsterdam in December 1992. The work was later published as On an Intuitionistic Modal Logic (Studia Logica, 2000).

That paper explored how a modal operator could be integrated into intuitionistic logic while preserving the constructive character of the system. One quickly discovers that the design space is surprisingly large: small changes in the interaction between modal operators and implication can produce quite different logical systems. For this reason, the paper did not merely introduce a particular system, but also aimed to clarify the choices involved in designing constructive modal logics, questions that later became central to much of the subsequent work in the area.

 

The resulting system was presented through several complementary proof-theoretic formulations, including a Hilbert system, a sequent calculus, and a natural deduction system, which were shown to be equivalent. However, as in the case of intuitionistic linear logic, constructing a natural deduction system that properly supports a Curry–Howard correspondence turns out to be subtle. The key issue is the preservation of substitution: constraints that are invisible at the level of the sequent calculus become essential in natural deduction, where proof trees need to compose. This phenomenon has direct consequences for the categorical semantics (which is the main point for me!). Ensuring that substitution behaves correctly corresponds to enforcing coherence conditions, which in practice appear as commuting conversions or require syntactic refinements such as dual-context systems. From this perspective, the proof theory and the semantics are tightly linked: the difficulties in designing natural deduction systems reflect precisely the structural constraints needed for a well-behaved categorical model.

 

Perhaps more importantly, the work highlighted that the design of constructive modal logics inevitably involves a number of nontrivial choices. Different assumptions about the interaction of □ and ◇ with the logical connectives lead to different but equally reasonable systems. Recognizing this helped reveal the broader landscape of intuitionistic modal logics, a landscape that was explored in Alex Simpson's doctoral thesis The Proof Theory and Semantics of Intuitionistic Modal Logic (1994).

Design choices in constructive modality

One lesson that emerged early on is that constructive modal logic is not a single system but rather a family of possible systems. When working constructively one must decide, for example:

  • how modal operators interact with implication
  • which modal principles remain valid intuitionistically
  • how proofs involving modalities should be structured
  • what kind of semantics best captures the intended meaning

Different answers to these questions lead to different logics, each suitable for different applications. Exploring this design space became an ongoing theme of my work and collaborations.
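To make the first two of those choices concrete, here is a small illustration, drawn from the standard literature rather than from this post: every system in this family validates the K axiom for necessity, while the principles for possibility below are exactly where well-known systems part ways.

```latex
% Accepted in all these systems (the K axiom for necessity):
\[ \Box(A \to B) \to (\Box A \to \Box B) \]
% Contested principles for possibility: intuitionistic modal logic IK
% validates both, while constructive modal logic CK rejects them:
\[ \Diamond(A \lor B) \to (\Diamond A \lor \Diamond B)
   \qquad \neg\,\Diamond\bot \]
```

Which of these one keeps is precisely the kind of design decision that produces "different but equally reasonable systems."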

Workshops and collaborations

Over the years, I organized several workshops and participated in many discussions devoted to constructive modal logics. These meetings brought together researchers from logic, type theory, and computer science, and they helped shape the direction of the field. Much of the progress in understanding these systems emerged from these exchanges: comparing proof systems, refining semantics, and exploring connections with programming languages and categorical logic. From time to time, I have also returned to these questions in blog posts as well as papers. Some of these posts reflect on the motivations behind constructive modal logics and the choices that arise when designing them; each revisits the same central theme from a slightly different perspective.

Over the years this line of work expanded well beyond a single logic or a single paper. The blog post "Papers about constructive modal logics" collects seventeen papers written across decades exploring different aspects of modality in constructive settings. While many of them deal directly with intuitionistic modal logics, others investigate related modal formalisms including hybrid logics, temporal logics, and description logics.

What unifies these papers is not a single system but a shared perspective: understanding how modal notions interact with constructive reasoning, and how these interactions affect proof theory, semantics, and applications in computer science. Seen together, these papers trace a research program that gradually explored the many different ways modality can be incorporated into constructive logical frameworks.

Looking forward

The interaction between modality and constructive reasoning continues to be rich and subtle. Modal operators appear naturally in many areas connected with computation and semantics, and intuitionistic logic remains the natural framework for many of these applications. As a result, the questions that motivated the early work have not disappeared. If anything, they have become even more relevant as connections between logic, type theory, and category theory continue to deepen.

 

This line of work also continues through the community that has formed around intuitionistic modal logic. Together with Sonia Marin, I am currently organizing another IMLA — Intuitionistic Modal Logic and Applications — workshop, which will take place as part of the Federated Logic Conference (FLoC) 2026 in Lisbon this July. We hope the workshop will bring together many of the different strands of research in the area—proof theory, semantics, type theory, and applications—and that it will showcase several of the traditions that have developed around intuitionistic and constructive modal logics over the years.

 

Looking back, it is satisfying to see how a question first explored in the early 1990s grew into a sustained line of work involving many people and ideas. Constructive modal logics remain a lively area of research, and the design space they reveal is still far from fully explored.