**Problem Detail:**

There exist several different types of logic -- first-order, second-order, and higher-order -- with many different sets of inference rules possible.

What I'm having trouble understanding is what the "underlying logic" is that we use to talk about the various different types of logic. For instance, if we were limited to thinking in terms of $i$th-order logic, then we couldn't talk about logics of order greater than $i$ -- yet we can. Does this mean that the logic we think in is more powerful than any $n$th-order logic?

Sorry this is so vague but I'm having a hard time formulating my question exactly.

Edit: Turing machines cannot even decide validity of first-order statements -- so does this imply some sort of relation between first-order logic and any other logic we can think of (given that humans are no more powerful than TMs)?

###### Asked By : Opt

#### Answered By : Andrej Bauer

One answer is that we use English, but that's not what you're looking for.

To make the question more precise, let us ask what sort of formalism is useful for describing logics. This is still quite broad, because a logic has several parts: syntax, rules of inference, semantics, etc.

Syntax is pretty well understood and we know that the syntax of a language can be described in terms of formal grammar. That is a well developed theory. In any case, concrete syntax is rather boring from the point of view of logic.

It is far more interesting to ask what we need in order to specify the inference rules of a logic. Because a general logic can be just about any formal system, let us restrict attention to traditional logics: classical first-order, intuitionistic first-order, second-order, higher-order, simple type theory, dependent type theory, and so on. One answer is that we should use a **logical framework** (LF). It is a variant of type theory designed so that when we specify a logic in it we have:

> A statement in the logic is provable if, and only if, the corresponding type in the logical framework is inhabited.

This is desirable because it says that the logical framework is neither too weak nor too powerful.

In terms of logical strength LF is very weak: it is a dependent type theory with first-order kinds and $\Pi$-types. Speaking vaguely, in terms of logical connectives this is a bit like using as our meta-theory a logic that has only $\forall$ and $\Rightarrow$ (and no equality).

There are implementations of logical frameworks, for instance Twelf. Of course, the point of such a tool is not just the description of logics, but also the study of their meta-theoretic properties. Among the Twelf case studies you can find formalizations in LF of cut elimination, the double-negation translation, classical modal logic S5, and a number of more programming-language-oriented examples.

One may also ask what formalism is necessary to specify the semantics of a logic. The semantics of traditional classical first-order logic goes under the name of *Tarski model theory* and uses set theory, although it is well known that a much weaker system suffices (a fragment of Peano arithmetic). Other kinds of logic require different approaches, but in general the meta-theory needed to describe the semantics is much less complicated than the semantics itself. For instance, the models of intuitionistic higher-order logic are elementary toposes. The axioms for an elementary topos are essentially algebraic, i.e., we can express all of them with equations (we do not even need *any* logical connectives, just equations); see Section II.2 of Lambek and Scott's "Introduction to Higher-Order Categorical Logic".
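For instance, the Tarski satisfaction relation itself is defined in the meta-theory by an ordinary recursion on the structure of formulas, with clauses such as

$$\mathcal{M} \models_\sigma \varphi \wedge \psi \quad\text{iff}\quad \mathcal{M} \models_\sigma \varphi \ \text{and}\ \mathcal{M} \models_\sigma \psi,$$

$$\mathcal{M} \models_\sigma \forall x.\,\varphi \quad\text{iff}\quad \mathcal{M} \models_{\sigma[x \mapsto a]} \varphi \ \text{for every}\ a \in |\mathcal{M}|,$$

where $\sigma$ assigns elements of the model to variables. Nothing beyond this recursion (plus enough set theory to quantify over $|\mathcal{M}|$) is required of the meta-theory.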


Question Source : http://cs.stackexchange.com/questions/33559

