Skolemization and Why It Fails Constructively
There is a standard move in classical logic that feels so innocent.
You take a formula with existential quantifiers depending on universals, and you replace those existential witnesses with function symbols. You eliminate existential quantifiers altogether, at the price of expanding the language. The result is equisatisfiable, often easier to manipulate, and forms the backbone of many proof procedures.
The move is called Skolemization.
In classical logic, it works beautifully.
In constructive logic—more precisely, in intuitionistic logic—it does not.
Or rather: it does not work in general. And understanding why it fails, and when it still works, seems to be more subtle than one might expect.
This is the labyrinth.
The classical thread
Very roughly, Skolemization transforms a formula like

∀x ∃y A(x, y)

into something like

∃f ∀x A(x, f(x)).

Or, more commonly in proof theory, we simply replace the existential with a Skolem function symbol f and consider

∀x A(x, f(x)).

Classically, this is justified because truth is about existence, not construction. If for every x there exists some y, then there is no harm in imagining a function f that picks one such y for each x. The logic does not ask us to compute f, only to accept its existence.
This step is so standard that one almost forgets it is a step.
The constructive issue
From a constructive point of view, however,

∀x ∃y A(x, y)

is not just a statement about existence: it is a statement about the ability to produce a y from an x.

To replace this with a function f is to claim something stronger: that there is a uniform method assigning to each x a specific witness f(x).
And this is not, in general, justified.
The gap is subtle but real. A constructive proof of

∀x ∃y A(x, y)

may proceed in ways that do not yield a single, global function f. The witnesses may depend on the structure of the proof, on case distinctions, or on information that is not uniformly available.
In other words: the existential quantifier may be locally justified without giving rise to a global choice function.
Skolemization silently inserts that global choice.
This is where it breaks.
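The gap can be made concrete in a proof assistant. Here is a sketch in Lean 4 (the relation A is a placeholder for illustration): the passage from the un-Skolemized to the Skolemized form goes through `Classical.axiomOfChoice`, which is exactly the non-constructive step, while the converse direction is a plain constructive proof.

```lean
variable {α β : Type} (A : α → β → Prop)

-- Skolemizing needs choice: `Classical.axiomOfChoice` is the
-- non-constructive step that manufactures the global function.
example (h : ∀ x, ∃ y, A x y) : ∃ f : α → β, ∀ x, A x (f x) :=
  Classical.axiomOfChoice h

-- The converse is constructive: a global function immediately
-- yields a local witness at each x.
example (h : ∃ f : α → β, ∀ x, A x (f x)) : ∀ x, ∃ y, A x y :=
  fun x => h.elim fun f hf => ⟨f x, hf x⟩
```

The asymmetry of the two directions is the whole story in miniature: un-Skolemizing is free, Skolemizing costs a choice principle.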
A conversation
Many years ago, I asked Grigori Mints about this.
He had written about Skolemization, and he was "the" expert in intuitionistic logic, so I was curious whether the problem had, in some sense, been solved: whether there was a clean constructive analogue of the classical transformation, or at least a sharp formulation of the constraints under which one can Skolemize constructively.
His answer was immediate and unequivocal:
No—Skolemization does not work for intuitionistic logic.
But, he added, it is worth understanding when it does work, and why.
When does it work?
This is where things become interesting.
There are situations in which something like Skolemization is admissible constructively. But these tend to come with extra structure, extra assumptions, or restrictions on the form of the formula.
For instance (and here I am deliberately being schematic rather than definitive):
- When the proof already contains enough uniformity to extract a function.
- When the domain over which x ranges has special properties (e.g., decidability conditions).
- When one works not in pure intuitionistic logic, but in systems enriched with choice principles.
- When the formulas involved are restricted (e.g., certain fragments where realizability guarantees uniform witnesses).
In these cases, the thread appears: one can pass from “for all x, there exists a y” to something function-like. But the thread is not given for free. It has to be constructed.
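One concrete instance of the decidability case: over the natural numbers, a decidable predicate together with an existence proof yields a computable least witness by unbounded search, since we can test each candidate in turn. A sketch in Lean 4, assuming Mathlib's `Nat.find`:

```lean
import Mathlib

-- Illustrative helper: extract a witness for a decidable predicate on ℕ.
-- `Nat.find` searches 0, 1, 2, …; the existence proof `h` guarantees
-- the search terminates, and the result is the least witness.
def leastWitness (p : Nat → Prop) [DecidablePred p] (h : ∃ n, p n) : Nat :=
  Nat.find h

-- The least n with n * n > 10 (namely 4), computed rather than asserted:
#eval leastWitness (fun n => n * n > 10) ⟨4, by decide⟩
```

Decidability is what turns the bare existence statement into an algorithm; without it, there is nothing to test at each stage of the search.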
What is the real issue?
What seems to be at stake is not just Skolemization itself, but the relationship between:
- existence and construction,
- local witnesses and global functions,
- proofs and programs.
From the perspective of the Curry–Howard correspondence, this becomes even sharper. A proof of

∀x ∃y A(x, y)

corresponds to a function that, given x, returns not just a y, but a pair (y, p), where p is a proof of A(x, y).
So why isn’t that already a Skolem function?
This is exactly the point where my understanding becomes less clear.
Is the issue intensionality? Is it about the structure of proofs versus the structure of terms? Is it about normalization, or about the admissibility of certain transformations on proofs?
Or is it simply that Skolemization, as usually formulated, forgets too much of the computational content?
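One standard observation that bears on this, without settling it: in type theory the answer depends on which existential is meant. With the strong, proof-relevant Σ, the choice principle is simply a theorem; with the Prop-valued ∃, the witness cannot be projected out. A Lean 4 sketch (again with A as a placeholder relation):

```lean
variable {α β : Type} (A : α → β → Prop)

-- With the proof-relevant Σ' (strong existence), choice is a theorem:
-- the Skolem function is read off directly from the proof term.
def sigmaChoice (h : ∀ x, Σ' y, A x y) :
    Σ' f : α → β, ∀ x, A x (f x) :=
  ⟨fun x => (h x).fst, fun x => (h x).snd⟩

-- With the Prop-valued ∃, the analogous definition is rejected:
-- `(h x).fst` would project a witness out of a proposition, which
-- the elimination rules for Prop forbid.
```

On this reading, the pair produced by a Curry–Howard proof really is a Skolem function; what blocks the classical transformation is the truncation step that forgets the pair and keeps only the bare proposition.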
Not a conclusion
I do not have a clean answer.
What I have instead is a sense that Skolemization marks a boundary: a place where classical and constructive reasoning diverge in a way that is not merely technical, but conceptual.
Understanding that boundary—mapping where the transformation fails, where it can be repaired, and what is lost or gained in the process—seems to me a worthwhile problem.
This post is not a solution.
It is a piece of thread.
Some references:
1. G. E. Mints, "The Skolem method in intuitionistic calculi" (in Russian), 1972.
2. Rosalie Iemhoff, "The Skolemization of prenex formulas in intermediate logics," 2019.
3. Matthias Baaz and Rosalie Iemhoff, "The Skolemization of existential quantifiers in intuitionistic logic," Annals of Pure and Applied Logic, 2006.

