
# [Solved]: Explanation of the knowledge representation hypothesis (Brian Smith)

Problem Detail:

In 1982 Brian Smith proposed his Knowledge Representation Hypothesis:

Any mechanically embodied intelligent process will be comprised of structural ingredients that

• we as external observers naturally take to represent a propositional account of the knowledge that the overall process exhibits, and
• independent of such external semantic attribution, play a formal but causal and essential role in engendering the behavior that manifests that knowledge.

Can someone simplify this statement or add some explanations to it?

In particular I don't understand what is meant by "propositional account" and "formal but causal role".

Thanks!

#### Answered By : Yuval Filmus

This is a philosophical statement which, as is the convention, uses sophisticated language. Here is another version (according to this page):

Any process capable of reasoning intelligently about the world must consist in part of a field of structures, of a roughly linguistic sort, which in some fashion represent whatever knowledge and beliefs the process may be said to possess.

You can visit the page I linked for some critique of both statements. For another critique, here is an example taken from these lecture notes. Consider the following two procedures for translating the colors red and blue into French:

```
function translate1(color):
    if color is "red": return "rouge"
    if color is "blue": return "bleu"

function translate2(color):
    dictionary = ["red" -> "rouge", "blue" -> "bleu"]
    return dictionary[color]
```

According to the lecture notes, only the second function `translate2` conforms to the hypothesis.
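To make the contrast concrete, here is a runnable Python sketch of the two procedures (the names and structure mirror the pseudocode above; the choice of Python is mine):

```python
# translate1 hard-wires the knowledge into control flow:
# there is no separate structure an observer could point to
# and read as "the dictionary".
def translate1(color):
    if color == "red":
        return "rouge"
    if color == "blue":
        return "bleu"

# translate2 keeps the knowledge in an explicit structure
# (the dictionary), which both represents the word pairs to
# an observer and causally produces the program's output.
DICTIONARY = {"red": "rouge", "blue": "bleu"}

def translate2(color):
    return DICTIONARY[color]
```

Both functions behave identically; the hypothesis is about how the knowledge is stored, not about what the program outputs.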

Now for a positive example. Consider an automated translation service such as Google Translate. As a vast simplification, Google has a dictionary that it uses to translate words from (say) English to French. This dictionary is a "propositional account" of the knowledge of the process. Here by "propositional account", Smith means a set of logical statements, for example:

The translation of red is rouge.

The translation of blue is bleu.

(Technically, he means first-order logic, so you would put these statements in concise logical form, or as Prolog statements.)

The translation program consults its dictionary to do its work. Thus the dictionary plays a "causal ... role" in the "behavior" of the system. That is, since rouge is the counterpart of red, if you give the program red it outputs rouge. We don't claim any 'real' intelligence for the program, so this role is only "formal", and moreover our understanding of the dictionary as a list of matching words in two different languages is only an "external semantic attribution" that is irrelevant for explaining the behavior of the program. After all, the program doesn't really 'speak' English or French, it only gives the impression of being able to.

Let me try to put the hypothesis in simpler words:

Programs use databases that represent knowledge.

(See *Surely You're Joking, Mr. Feynman!*, page 281.)

Now we can come up with many more examples. JPEG compression programs use knowledge about the human visual system in the form of a quantization matrix, which encodes which frequency (DCT) coefficients matter most for perceived image quality. Recommendation systems use a database of products that can be recommended, and another database that keeps track of what other users liked. OCR systems use implicit representations of symbols (letters, digits, and punctuation) in the form of a machine learning recognizer for them.
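The JPEG example can be sketched in a few lines of Python. This is a simplified illustration with a toy 4×4 matrix whose values are mine, not the actual JPEG tables, and the DCT step itself is omitted; the point is only that the perceptual knowledge sits in an explicit structure (the matrix) that causally shapes the output:

```python
# A toy "quantization matrix": small divisors for low frequencies
# (top-left), large divisors for high frequencies (bottom-right).
# These values are illustrative, not the real JPEG tables.
Q = [
    [ 4,  8, 16,  32],
    [ 8, 16, 32,  64],
    [16, 32, 64,  96],
    [32, 64, 96, 128],
]

def quantize(dct_block):
    """Divide each DCT coefficient by the matching matrix entry and
    round; high-frequency detail is discarded more aggressively."""
    return [[round(dct_block[i][j] / Q[i][j]) for j in range(4)]
            for i in range(4)]
```

An observer can read `Q` as a propositional account ("frequency (0,0) matters four times as much as frequency (0,2)"), while inside the program it is just numbers that the division uses.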

Does modern machine learning conform to the hypothesis? Considering the last example, optical character recognition, the "catalog of characters" isn't stored as such, but only implicitly as (say) a set of weights in some neural network. This is certainly not a "field of structures, of a roughly linguistic sort", as per the other version of the hypothesis.

Modern artificial intelligence has largely moved away from the naive and romantic view of classical artificial intelligence, as exemplified by the KR hypothesis. Instead, now we often use statistical methods of machine learning which are much more successful in practice but much less satisfying from a human perspective.

Are knowledge representation techniques used in the real world? This is a question left for the experts to answer.