# Coincidence lemma (first-order logic)

### From Learning Logic for Computer Science

Given a structure, the truth value of a first-order formula under an assignment depends (at most) on the values of the variables occurring free in this formula.

**Example**
In the formula $\exists x_0 (f(x_0) \doteq x_1)$ the truth value of the formula under an assignment (in a given structure) does not depend on the value of $x_0$, because every occurrence of $x_0$ is in the scope of a quantification of $x_0$, but it very much depends on the value of $x_1$.

**Example**
A more complicated example is $\exists x_0 f(x_0) \doteq x_1 \wedge x_0 \doteq g(x_1)$. Here, there are two occurrences of $x_0$: the first one is in the scope of a quantification, but the second one is not. So the truth value of the formula depends on the values of $x_0$ *and* $x_1$.
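The first example can be checked computationally. The sketch below fixes a hypothetical structure (domain $\{0,1,2\}$ with $f$ interpreted as squaring modulo $3$; these interpretations are illustrative assumptions, not from the text) and evaluates $\exists x_0 (f(x_0) \doteq x_1)$ under assignments that agree on $x_1$ but differ on $x_0$:

```python
# Hypothetical structure: domain A = {0, 1, 2}, f interpreted as x^2 mod 3.
# (These interpretations are illustrative assumptions, not from the text.)
A = {0, 1, 2}
f = lambda x: (x * x) % 3

def holds(beta):
    """A, beta |= exists x0 (f(x0) = x1): search for a witness a in A."""
    return any(f(a) == beta["x1"] for a in A)

# Two assignments agreeing on the free variable x1 but differing on x0
# yield the same truth value:
assert holds({"x0": 0, "x1": 1}) == holds({"x0": 2, "x1": 1})

# The value of the free variable x1, however, matters:
assert holds({"x0": 0, "x1": 1})      # 1 is a square mod 3
assert not holds({"x0": 0, "x1": 2})  # 2 is not a square mod 3
```

Note that `holds` never reads `beta["x0"]`: the quantifier supplies its own candidate values for $x_0$, which is exactly why the assigned value of $x_0$ is irrelevant.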

## Formal statement

**Lemma**
Let $\mathcal S$ be a signature, $\varphi$ a first-order formula over $\mathcal S$, $\mathcal A$ an $\mathcal S$-structure, and let $\beta_0$ and $\beta_1$ be variable assignments in $\mathcal A$. If $\beta_0\vert_{\text{fvars}(\varphi)} = \beta_1\vert_{\text{fvars}(\varphi)}$, then
\[ \llbracket\varphi\rrbracket_{\beta_0}^{\mathcal A} = \llbracket\varphi\rrbracket_{\beta_1}^{\mathcal A} \enspace.\]

## Proof preparation

Clearly, this statement calls for a proof by induction! But what are, presumably, the crucial points? First, we will need a useful statement about terms in this context, because formulas are built from terms. Second, the only interesting case in the definition of free variables is quantification, and the semantics of quantification is tightly connected with variable assignments. So we will also need a useful statement about variable assignments and their modifications.

**Lemma**
For every $\mathcal S$-term $t$ and all variable assignments $\beta_0$ and $\beta_1$ in $\mathcal A$: if $\beta_0\vert_{\text{vars}(t)} = \beta_1\vert_{\text{vars}(t)}$, then $\llbracket t \rrbracket_{\beta_0}^\mathcal A = \llbracket t \rrbracket_{\beta_1}^\mathcal A$.
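In code, this lemma mirrors the inductive structure of term evaluation. The sketch below uses a hypothetical tuple encoding of terms (`("var", i)` for $x_i$, `("func", name, ...)` for function application) and example interpretations for the symbols `f` and `g`:

```python
# Hypothetical encoding of S-terms as nested tuples:
#   ("var", i)                 the variable x_i
#   ("func", name, t0, ...)    application of a function symbol
FUNCS = {"f": lambda x: x + 1, "g": lambda x: 2 * x}  # example interpretations

def vars_of(t):
    """vars(t): the set of variable indices occurring in t."""
    if t[0] == "var":
        return {t[1]}
    return set().union(*(vars_of(s) for s in t[2:]))

def value(t, beta):
    """[[t]]_beta: evaluate t under the assignment beta (a dict)."""
    if t[0] == "var":
        return beta[t[1]]
    return FUNCS[t[1]](*(value(s, beta) for s in t[2:]))

t = ("func", "f", ("func", "g", ("var", 1)))  # the term f(g(x1))
beta0 = {0: 7, 1: 3}
beta1 = {0: 9, 1: 3}  # agrees with beta0 on vars(t) = {1}
assert vars_of(t) == {1}
assert value(t, beta0) == value(t, beta1)  # both give f(g(3)) = 7
```

The proof of the lemma is an induction on the structure of $t$, exactly the recursion that `value` performs: a variable case and a function-application case.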

**Lemma**
Let $\beta_0$ and $\beta_1$ be $\mathcal A$-assignments and $M \subseteq \{x_0, x_1, \dots\}$ such that $\beta_0 \vert_M = \beta_1 \vert_M$. For every $a \in A$ and $i \in \mathbf N$,
\begin{align}
\beta_0 \left[\frac {x_i} a\right] \vert_{M \cup \{x_i\} } =
\beta_1 \left[\frac {x_i} a\right] \vert_{M \cup \{x_i\} } \enspace.
\end{align}
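This second lemma is easy to check directly: overriding both assignments at $x_i$ with the same value $a$ extends their agreement from $M$ to $M \cup \{x_i\}$. A minimal sketch, with assignments encoded as Python dicts (an assumed encoding, not from the text):

```python
def update(beta, i, a):
    """beta[x_i / a]: like beta, except that x_i is mapped to a."""
    out = dict(beta)
    out[i] = a
    return out

def restrict(beta, M):
    """beta restricted to the variable indices in M."""
    return {x: v for x, v in beta.items() if x in M}

beta0 = {0: "p", 1: "q", 2: "r"}
beta1 = {0: "p", 1: "q", 2: "s"}  # agree on M = {0, 1}, differ at 2
M = {0, 1}
assert restrict(beta0, M) == restrict(beta1, M)

i, a = 2, "t"  # override the disagreeing variable with a common value a
assert restrict(update(beta0, i, a), M | {i}) == \
       restrict(update(beta1, i, a), M | {i})
```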

## Proof of the lemma

**Base cases.**
There are three cases to consider for the base step. We only discuss one of them.

Assume $\varphi = R(t_0, \dots, t_{n-1})$. From $\beta_0\vert_{\text{fvars}(\varphi)} = \beta_1\vert_{\text{fvars}(\varphi)}$, we conclude \begin{align} \beta_0\vert_{\text{vars}(t_i)} = \beta_1\vert_{\text{vars}(t_i)} \enspace, && \text{for $i<n$,} \end{align} because $\text{vars}(t_i) \subseteq \text{fvars}(\varphi)$ by definition of $\text{fvars}$.

From the first lemma above, we conclude \begin{align} \tag{*} \llbracket t_i \rrbracket_{\beta_0} = \llbracket t_i \rrbracket_{\beta_1} \enspace, && \text{for $i<n$.} \end{align}

We conclude: \begin{align*} \mathcal A, \beta_0 \mymodels \varphi & \text{ iff } \langle \llbracket t_0 \rrbracket_{\beta_0}^\mathcal A, \dots, \llbracket t_{n-1} \rrbracket_{\beta_0}^\mathcal A\rangle \in R^\mathcal A && \text{def. of semantics and $\varphi$}\\ & \text{ iff } \langle \llbracket t_0 \rrbracket_{\beta_1}^\mathcal A, \dots, \llbracket t_{n-1} \rrbracket_{\beta_1}^\mathcal A\rangle \in R^\mathcal A && \text{(*)}\\ & \text{ iff } \mathcal A, \beta_1 \mymodels \varphi \enspace. && \text{def. of semantics and $\varphi$} \end{align*}

**Inductive step.**
There are two cases to consider. We only discuss (existential) quantification.

Assume $\varphi = \exists x_i \psi$. The inductive assumption is that the coincidence lemma holds with $\psi$ in place of $\varphi$. The premise of the lemma is $\beta_0 \vert_{\text{fvars}(\varphi)} = \beta_1 \vert_{\text{fvars}(\varphi)}$. Since $\text{fvars}(\varphi) = \text{fvars}(\psi) \setminus \{x_i\}$, this premise says \begin{align} \beta_0 \vert_{\text{fvars}(\psi) \setminus \{x_i\}} = \beta_1 \vert_{\text{fvars}(\psi) \setminus \{x_i\}} \enspace. \end{align} So, by the second lemma above (with $M = \text{fvars}(\psi) \setminus \{x_i\}$, using $\text{fvars}(\psi) \subseteq M \cup \{x_i\}$), for every $a \in A$, \begin{align} \tag{**} \beta_0 \left[\frac {x_i} a\right]\vert_{\text{fvars}(\psi)} = \beta_1 \left[\frac {x_i} a\right]\vert_{\text{fvars}(\psi)} \enspace. \end{align}

We have: \begin{align} \mathcal A, \beta_0 \mymodels \exists x_i \psi & \text{ iff } \text{there exists $a \in A$ such that } \mathcal A, \beta_0\left[\frac {x_i} a\right] \mymodels \psi && \text{def. of semantics}\\ & \text{ iff } \text{there exists $a \in A$ such that } \mathcal A, \beta_1\left[\frac {x_i} a\right] \mymodels \psi && \text{(**) and inductive assumption}\\ & \text{ iff } \mathcal A, \beta_1 \mymodels \exists x_i \psi \enspace. && \text{def. of semantics} \end{align}

This concludes the proof (for the cases discussed).

# Applications

The coincidence lemma justifies working with partial variable assignments: when a formula is evaluated in a structure under an assignment, the only values that need to be known, and thus defined, are those for the free variables.

The coincidence lemma also justifies the use of formulas for defining relations: a formula $\varphi$ with $\text{fvars}(\varphi) \subseteq \{x_0, \dots, x_{n-1}\}$ defines an $n$-ary relation on the domain, namely the set of tuples $(a_0, \dots, a_{n-1})$ for which $\varphi$ holds when each $x_j$ is assigned $a_j$, and by the coincidence lemma this set does not depend on which assignment one starts from.
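A sketch of this last point, under assumed encodings (a formula is a Python predicate over assignment dicts, the structure's domain is $\{0,1,2,3\}$, and the example formula is hypothetical): the relation obtained by ranging the first $n$ variables over all domain values is the same for any starting assignment.

```python
from itertools import product

A = [0, 1, 2, 3]  # hypothetical domain

def phi(beta):
    """Example formula with fvars {x0, x1}: 'x0 is nonzero and divides x1'."""
    return beta[0] != 0 and beta[1] % beta[0] == 0

def defined_relation(formula, n, beta):
    """The n-ary relation defined by the formula, starting from beta."""
    rel = set()
    for tup in product(A, repeat=n):
        b = dict(beta)
        for i, a in enumerate(tup):  # b = beta[x0/a0]...[x_{n-1}/a_{n-1}]
            b[i] = a
        if formula(b):
            rel.add(tup)
    return rel

# Two unrelated starting assignments define the same relation:
assert defined_relation(phi, 2, {0: 0, 1: 0}) == defined_relation(phi, 2, {7: 99})
```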