Coincidence lemma (propositional logic)

From Learning Logic for Computer Science

The truth value of a propositional formula under a given propositional variable assignment depends only on the values the assignment gives to the variables occurring in the formula, not on the values of any other variables.


A formal statement to this effect needs some preparation.

Definition The set of variables occurring in a propositional formula is defined inductively via a function $\text{vars} \colon F_\text{PL} \to \wp(\text{PVars})$.

Base mapping:

  • $\text{vars}(\bot) = \text{vars}(\top) = \emptyset$.
  • $\text{vars}(X_i) = \{X_i\}$ for every $i \in \mathbf N$.

Inductive rule:

  • If $C$ is an $n$-ary connective and $\varphi_0, \dots, \varphi_{n-1}$ are propositional formulas, then $\text{vars}(C(\varphi_0, \dots, \varphi_{n-1})) = \bigcup_{i<n} \text{vars}(\varphi_i)$.

Example $\text{vars}(X_3 \leftrightarrow \neg X_2) = \{X_2, X_3\}$.
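The inductive definition above translates directly into a short recursive function. The sketch below uses an encoding of formulas as nested tuples that is our own convention, not from the text: `('bot',)` and `('top',)` for the constants, `('var', i)` for $X_i$, and `(connective, sub0, ..., sub_{n-1})` for composite formulas.

```python
# A minimal sketch of vars(), assuming formulas are nested tuples:
# ('bot',), ('top',), ('var', i), or (connective, sub0, ..., sub_{n-1}).

def vars_of(phi):
    """Return the set of variable indices occurring in phi, inductively."""
    head = phi[0]
    if head in ('bot', 'top'):   # base mapping: the constants have no variables
        return set()
    if head == 'var':            # base mapping: vars(X_i) = {X_i}
        return {phi[1]}
    # inductive rule: union over all immediate subformulas
    return set().union(*(vars_of(sub) for sub in phi[1:]))

# Example from the text: vars(X_3 <-> not X_2) = {X_2, X_3}
phi = ('iff', ('var', 3), ('not', ('var', 2)))
print(sorted(vars_of(phi)))  # [2, 3]
```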

A consequence of this is:

(*) If $C$ is an $n$-ary connective and $\varphi_0, \dots, \varphi_{n-1}$ are propositional formulas, then \[\text{vars}(\varphi_i) \subseteq \text{vars}(C(\varphi_0, \dots, \varphi_{n-1}))\enspace.\]

Coincidence lemma

Formal statement

Lemma Let $\varphi$ be a propositional formula, and let $\beta_0$ and $\beta_1$ be propositional variable assignments satisfying $\beta_0\vert_{\text{vars}(\varphi)} = \beta_1\vert_{\text{vars}(\varphi)}$. Then \[ \llbracket\varphi\rrbracket_{\beta_0} = \llbracket\varphi\rrbracket_{\beta_1} \enspace.\]

Proof The proof is by induction on the structure of $\varphi$. For this inductive proof, we fix variable assignments $\beta_0$ and $\beta_1$. The claim is: for every propositional formula $\varphi$, if $\beta_0\vert_{\text{vars}(\varphi)} = \beta_1\vert_{\text{vars}(\varphi)}$, then $\llbracket\varphi\rrbracket_{\beta_0} = \llbracket\varphi\rrbracket_{\beta_1}$.

Base cases:

  • For $\varphi = \bot$, we have:

\begin{align} \llbracket\bot\rrbracket_{\beta_0} & = 0 && \text{base case in inductive def. of $\llbracket \rrbracket$}\\ & =\llbracket\bot\rrbracket_{\beta_1} && \text{base case in inductive def. of $\llbracket \rrbracket$} \end{align}

  • For $\varphi = \top$ the argument is analogous.
  • For $\varphi = X_i$ for some $i$, we have:

\begin{align} \llbracket X_i\rrbracket_{\beta_0} & = \beta_0(X_i) && \text{base case in inductive def. of $\llbracket \rrbracket$}\\ & = \beta_1(X_i) && \text{$\text{vars}(X_i) = \{X_i\} $}\\ & = \llbracket X_i\rrbracket_{\beta_1} && \text{base case in inductive def. of $\llbracket \rrbracket$} \enspace. \end{align}

Inductive step: this is more involved. One useful observation to make in advance is the following.

(**) Assume $V_0 \subseteq V_1 \subseteq \text{PVars}$ holds. If $\beta_0\vert_{V_1} = \beta_1\vert_{V_1}$, then $\beta_0\vert_{V_0} = \beta_1\vert_{V_0}$.

The inductive step itself is now as follows.

Let $\varphi = C(\varphi_0, \dots, \varphi_{n-1})$ where $C$ is an $n$-ary connective and $\varphi_0, \dots, \varphi_{n-1}$ are propositional formulas.

The inductive assumption is that the following holds for every $i < n$: if $\beta_0\vert_{\text{vars}(\varphi_i)} = \beta_1\vert_{\text{vars}(\varphi_i)}$, then $\llbracket\varphi_i\rrbracket_{\beta_0} = \llbracket\varphi_i\rrbracket_{\beta_1}$.

Now, assume $\beta_0\vert_{\text{vars}(\varphi)} = \beta_1\vert_{\text{vars}(\varphi)}$.

Using (*) we obtain $\text{vars}(\varphi_i) \subseteq \text{vars}(\varphi)$ for every $i<n$. From (**) we can then conclude $\beta_0\vert_{\text{vars}(\varphi_i)} = \beta_1\vert_{\text{vars}(\varphi_i)}$ for every $i<n$. The inductive assumption yields:

(***) $\llbracket\varphi_i\rrbracket_{\beta_0} = \llbracket\varphi_i\rrbracket_{\beta_1}$ for every $i<n$.

The rest is straightforward:

\begin{align} \llbracket\varphi\rrbracket_{\beta_0} & = \llbracket C(\varphi_0, \dots, \varphi_{n-1})\rrbracket_{\beta_0} && \text{definition of $\varphi$}\\ & = f_C(\llbracket\varphi_0\rrbracket_{\beta_0}, \dots, \llbracket\varphi_{n-1}\rrbracket_{\beta_0}) && \text{inductive rule in def. of $\llbracket\rrbracket$}\\ & = f_C(\llbracket\varphi_0\rrbracket_{\beta_1}, \dots, \llbracket\varphi_{n-1}\rrbracket_{\beta_1}) && \text{(***)}\\ & = \llbracket C(\varphi_0, \dots, \varphi_{n-1})\rrbracket_{\beta_1} && \text{inductive rule in the def. of $\llbracket\rrbracket$}\\ & = \llbracket\varphi\rrbracket_{\beta_1} && \text{definition of $\varphi$} \end{align}
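The lemma can be sanity-checked by brute force on small instances. Below is a minimal sketch, assuming formulas encoded as nested tuples and assignments as dicts from variable index to truth value; the names `OPS` and `eval_f` are illustrative, not from the text.

```python
# Semantics of the usual connectives as functions f_C on {0, 1}.
OPS = {
    'not': lambda a: 1 - a,
    'and': lambda a, b: a & b,
    'or':  lambda a, b: a | b,
    'imp': lambda a, b: (1 - a) | b,
    'iff': lambda a, b: 1 - (a ^ b),
}

def eval_f(phi, beta):
    """Evaluate [[phi]]_beta following the inductive definition."""
    head = phi[0]
    if head == 'bot':
        return 0
    if head == 'top':
        return 1
    if head == 'var':
        return beta[phi[1]]
    return OPS[head](*(eval_f(sub, beta) for sub in phi[1:]))

# phi mentions only X_0; beta0 and beta1 agree on X_0 but differ on X_7,
# so the coincidence lemma predicts equal truth values.
phi = ('imp', ('var', 0), ('not', ('var', 0)))
beta0 = {0: 1, 7: 0}
beta1 = {0: 1, 7: 1}
assert eval_f(phi, beta0) == eval_f(phi, beta1)
```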

Partial assignments

From the Coincidence lemma we conclude: an assignment does not need to be specified fully in order to be able to evaluate a formula under it. More precisely, if $\varphi$ is a propositional formula and the values of $\beta$ for the elements of $\text{vars}(\varphi)$ are known, then $\llbracket\varphi\rrbracket_\beta$ can be determined.

That is why partial assignments $\beta \colon \text{PVars} \not\to \{0,1\}$, that is, partial functions from $\text{PVars}$ to $\{0,1\}$, are allowed to be used in contexts where, according to the formal definition, ordinary assignments are required. It is, however, important that such a partial assignment is defined in the right places for the expressions involved to be well-defined.

We say a partial assignment $\beta$ fits a formula $\varphi$ if $\text{vars}(\varphi) \subseteq \text{dom}(\beta)$. We say a partial assignment $\beta$ tightly fits a formula $\varphi$ if $\text{vars}(\varphi) = \text{dom}(\beta)$.

Example Let $\beta = \{X_2 \mapsto 0, X_3 \mapsto 1\}$. Then $\beta$ fits $\neg X_3$, but not tightly. In addition, $\beta$ fits $X_3 \vee \neg X_2$ tightly, but it does not fit $X_5$ at all.

Fitting assignments can replace ordinary assignments.

Example It is ok to write $\llbracket \neg X_3 \rrbracket_{\{X_2 \mapsto 0, X_3 \mapsto 1\} }$, but it is not ok to write $\llbracket X_5\rrbracket_{\{X_2 \mapsto 0, X_3 \mapsto 1\} }$.
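The "fits" conditions are simple set comparisons between $\text{vars}(\varphi)$ and the domain of the partial assignment. A sketch under the tuple encoding used before; the helper names `fits` and `fits_tightly` are ours, not from the text.

```python
# A partial assignment is a dict; its key set is its domain dom(beta).

def vars_of(phi):
    head = phi[0]
    if head in ('bot', 'top'):
        return set()
    if head == 'var':
        return {phi[1]}
    return set().union(*(vars_of(sub) for sub in phi[1:]))

def fits(beta, phi):
    """beta fits phi iff vars(phi) is a subset of dom(beta)."""
    return vars_of(phi) <= beta.keys()

def fits_tightly(beta, phi):
    """beta tightly fits phi iff vars(phi) equals dom(beta)."""
    return vars_of(phi) == beta.keys()

beta = {2: 0, 3: 1}  # {X_2 -> 0, X_3 -> 1}
print(fits(beta, ('not', ('var', 3))))          # True (but not tightly)
print(fits_tightly(beta,
      ('or', ('var', 3), ('not', ('var', 2))))) # True
print(fits(beta, ('var', 5)))                   # False
```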

Relevant variables

In some sense, the coincidence lemma restricts the variables relevant for evaluating a formula to those occurring in it. In some cases, however, not every one of these variables is actually relevant.

Example Consider the formula $X_5 \vee \neg X_5$. The truth value of this formula does not depend on the value of any variable, because it is $1$ regardless of the variable assignment it is evaluated under.

Here is a formal definition of relevance. A variable $X_i$ is said to be relevant for a formula $\varphi$ if there are variable assignments $\beta_0$ and $\beta_1$ such that

  1. $\beta_0(X_j) = \beta_1(X_j)$ for all $j \neq i$ and
  2. $\llbracket\varphi\rrbracket_{\beta_0} \neq \llbracket\varphi\rrbracket_{\beta_1}$.

The set of all relevant variables of a formula $\varphi$ is denoted $\text{rvars}(\varphi)$.

A consequence of the coincidence lemma is that $\text{rvars}(\varphi) \subseteq \text{vars}(\varphi)$ holds for every propositional formula $\varphi$.
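The definition of relevance can be checked by brute force: $X_i$ is relevant iff flipping some assignment at $X_i$ flips the truth value. By the coincidence lemma it suffices to let the assignments range over $\text{vars}(\varphi)$ only. A sketch under the tuple encoding used before; all helper names are ours.

```python
from itertools import product

OPS = {
    'not': lambda a: 1 - a,
    'and': lambda a, b: a & b,
    'or':  lambda a, b: a | b,
    'imp': lambda a, b: (1 - a) | b,
    'iff': lambda a, b: 1 - (a ^ b),
}

def vars_of(phi):
    head = phi[0]
    if head in ('bot', 'top'):
        return set()
    if head == 'var':
        return {phi[1]}
    return set().union(*(vars_of(sub) for sub in phi[1:]))

def eval_f(phi, beta):
    head = phi[0]
    if head == 'bot':
        return 0
    if head == 'top':
        return 1
    if head == 'var':
        return beta[phi[1]]
    return OPS[head](*(eval_f(sub, beta) for sub in phi[1:]))

def rvars(phi):
    """X_i is relevant iff flipping some assignment at X_i flips the value."""
    vs = sorted(vars_of(phi))
    relevant = set()
    for i in vs:
        for bits in product((0, 1), repeat=len(vs)):
            beta0 = dict(zip(vs, bits))
            beta1 = dict(beta0)
            beta1[i] = 1 - beta1[i]   # beta0 and beta1 differ only at X_i
            if eval_f(phi, beta0) != eval_f(phi, beta1):
                relevant.add(i)
                break
    return relevant

# X_5 v not X_5 is a tautology, so no variable is relevant.
print(rvars(('or', ('var', 5), ('not', ('var', 5)))))  # set()
```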

Exercise Determine the relevant variables for the following formulas. Explain your results.

  1. $X_0 \rightarrow X_1$.
  2. $X_0 \vee \neg (X_0 \wedge X_3)$.
  3. $X_0 \wedge X_1 \rightarrow \neg X_4 \vee X_1$.

  1. $\operatorname{rvars}(X_0 \rightarrow X_1)=\{X_0, X_1\}$:
    $X_0\in \operatorname{rvars}(X_0 \rightarrow X_1)$ because for $\beta_0=\{X_0\mapsto 1, X_1\mapsto 0\}$ and $\beta_1=\{X_0\mapsto 0, X_1\mapsto 0\}$ it holds that $\llbracket X_0 \rightarrow X_1 \rrbracket_{\beta_0} = 0 \neq 1 = \llbracket X_0 \rightarrow X_1 \rrbracket_{\beta_1}$. Similarly, $X_1\in \operatorname{rvars}(X_0 \rightarrow X_1)$ because for $\beta_0=\{X_0\mapsto 1, X_1\mapsto 0\}$ and $\beta_1=\{X_0\mapsto 1, X_1\mapsto 1\}$ it holds that $\llbracket X_0 \rightarrow X_1 \rrbracket_{\beta_0} = 0 \neq 1 = \llbracket X_0 \rightarrow X_1 \rrbracket_{\beta_1}$.
  2. $\operatorname{rvars}(X_0 \vee \neg (X_0 \wedge X_3))=\emptyset$ because $\llbracket X_0 \vee \neg (X_0 \wedge X_3)\rrbracket_\beta = 1$ for any assignment $\beta$, and thus there cannot be two assignments $\beta_0,\beta_1$ such that $\llbracket X_0 \vee \neg (X_0 \wedge X_3)\rrbracket_{\beta_0} \neq \llbracket X_0 \vee \neg (X_0 \wedge X_3)\rrbracket_{\beta_1} $.
  3. $\operatorname{rvars}(X_0 \wedge X_1 \rightarrow \neg X_4 \vee X_1)=\emptyset$ because $\llbracket X_0 \wedge X_1 \rightarrow \neg X_4 \vee X_1\rrbracket_\beta = 1$ for any assignment $\beta$, and thus there cannot be two assignments $\beta_0,\beta_1$ such that $\llbracket X_0 \wedge X_1 \rightarrow \neg X_4 \vee X_1\rrbracket_{\beta_0} \neq \llbracket X_0 \wedge X_1 \rightarrow \neg X_4 \vee X_1\rrbracket_{\beta_1} $.

Exercise Find a formula $\varphi$ such that $\emptyset\subsetneq\operatorname{rvars}(\varphi)\subsetneq \operatorname{vars}(\varphi)$.


For $\varphi = (X_0 \vee (X_1 \vee \top)) \wedge X_2$ it holds that $\emptyset\subsetneq \operatorname{rvars}(\varphi) = \{X_2\}\subsetneq \operatorname{vars}(\varphi) = \{X_0, X_1, X_2\}$.


Exercise

  1. Prove that the coincidence lemma still holds when $\text{vars}(\varphi)$ is replaced by $\text{rvars}(\varphi)$.
  2. Show that no proper subset of $\text{rvars}(\varphi)$ can replace $\text{vars}(\varphi)$ in the coincidence lemma.

Truth tables

If $\varphi$ is a formula, then, according to the Coincidence lemma, the value $\llbracket\varphi\rrbracket_\beta$ is determined by the restriction $\beta\vert_{\text{vars}(\varphi)}$, so it suffices to consider partial assignments $\beta \colon \text{vars}(\varphi) \to \{0,1\}$. In a truth table for $\varphi$, there is a row for every such partial assignment and a column for every subformula of $\varphi$. Typically, the subformulas are listed in non-decreasing order of size, which means the variables come first.

The entries for the variables are determined by the respective partial assignment; the entries for the boolean constants are obvious; the entries for the composite formulas are determined from other entries according to the inductive rule of the semantics.

Note that a truth table for a given formula is unique up to reordering of the rows and columns.

Example Let $\varphi = \neg X_0 \leftrightarrow (X_0 \wedge X_1)$. Then the truth table is

$X_0$ | $X_1$ | $\neg X_0$ | $X_0 \wedge X_1$ | $\neg X_0 \leftrightarrow (X_0 \wedge X_1)$
$0$ | $0$ | $1$ | $0$ | $0$
$0$ | $1$ | $1$ | $0$ | $0$
$1$ | $0$ | $0$ | $0$ | $1$
$1$ | $1$ | $0$ | $1$ | $0$
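A truth-table generator is a direct combination of the pieces above: enumerate all assignments $\beta \colon \text{vars}(\varphi) \to \{0,1\}$ and evaluate every subformula under each. A sketch under the tuple encoding used before; the helper names are ours, not from the text.

```python
from itertools import product

OPS = {
    'not': lambda a: 1 - a,
    'and': lambda a, b: a & b,
    'or':  lambda a, b: a | b,
    'imp': lambda a, b: (1 - a) | b,
    'iff': lambda a, b: 1 - (a ^ b),
}

def vars_of(phi):
    head = phi[0]
    if head in ('bot', 'top'):
        return set()
    if head == 'var':
        return {phi[1]}
    return set().union(*(vars_of(sub) for sub in phi[1:]))

def eval_f(phi, beta):
    head = phi[0]
    if head == 'bot':
        return 0
    if head == 'top':
        return 1
    if head == 'var':
        return beta[phi[1]]
    return OPS[head](*(eval_f(sub, beta) for sub in phi[1:]))

def subformulas(phi):
    """The set of all subformulas of phi, including phi itself."""
    subs = {phi}
    if phi[0] not in ('bot', 'top', 'var'):
        for sub in phi[1:]:
            subs |= subformulas(sub)
    return subs

def size(phi):
    if phi[0] in ('bot', 'top', 'var'):
        return 1
    return 1 + sum(size(sub) for sub in phi[1:])

def truth_table(phi):
    """Columns: subformulas in non-decreasing size; one row per assignment."""
    cols = sorted(subformulas(phi), key=size)
    vs = sorted(vars_of(phi))
    rows = []
    for bits in product((0, 1), repeat=len(vs)):
        beta = dict(zip(vs, bits))
        rows.append([eval_f(col, beta) for col in cols])
    return cols, rows

# phi = not X_0 <-> (X_0 and X_1), the example formula
phi = ('iff', ('not', ('var', 0)), ('and', ('var', 0), ('var', 1)))
cols, rows = truth_table(phi)
print([row[-1] for row in rows])  # [0, 0, 1, 0]
```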