Knowledge representation
Knowledge representation is an area of artificial intelligence concerned with how to "think" formally, that is, how to use a symbol system to represent a domain of discourse (that which can be talked about) together with functions that permit inference (formalized reasoning) about the objects in that domain. Generally speaking, some kind of logic is used both to supply a formal semantics for how the reasoning functions apply to the symbols of the domain of discourse, and to supply (depending on the particular logic) operators such as quantifiers and modal operators that, together with an interpretation theory, give meaning to the sentences of the logic.
When we design a knowledge representation (and a knowledge representation system to interpret sentences in the logic in order to derive inferences from them), we have to make trade-offs across a number of design dimensions, described in the following sections. The single most important decision, however, is the expressivity of the KR. The more expressive the KR, the easier (and more compact) it is to "say something". However, more expressive languages are harder to automatically derive inferences from. An example of a less expressive KR is propositional logic; an example of a more expressive KR is autoepistemic temporal modal logic. Less expressive KRs (formally, those less expressive than set theory) may be both complete and consistent; more expressive KRs may be neither complete nor consistent.
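To make the trade-off concrete, the sketch below (a minimal Python illustration added here, not part of the original article) shows why inference in a less expressive KR such as propositional logic is mechanically decidable: entailment can be checked by brute-force enumeration of truth assignments. The symbols and knowledge base are invented for the example.

    # Illustrative sketch: deciding propositional entailment by enumerating
    # every truth assignment (possible because propositional logic is decidable).
    from itertools import product

    def entails(kb, query, symbols):
        """Return True if every assignment satisfying all KB sentences also
        satisfies the query. Sentences are Python predicates over a model dict."""
        for values in product([True, False], repeat=len(symbols)):
            model = dict(zip(symbols, values))
            if all(sentence(model) for sentence in kb) and not query(model):
                return False
        return True

    # Invented knowledge base: "if it rains, the street is wet" and "it rains".
    kb = [
        lambda m: (not m["rain"]) or m["wet"],   # rain -> wet
        lambda m: m["rain"],
    ]
    print(entails(kb, lambda m: m["wet"], ["rain", "wet"]))  # True: KB entails "wet"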
The key problem is to find a KR (and a supporting reasoning system) that can make the inferences your application needs in time, that is, within the resource constraints appropriate to the problem at hand. This tension between the kinds of inferences an application "needs", what counts as "in time", and the cost of generating the representation itself is what makes knowledge representation engineering interesting.
Overview
There are representation techniques such as frames, rules and semantic networks which originated from theories of human information processing. Since knowledge is used to achieve intelligent behavior, the fundamental goal of knowledge representation is to represent knowledge in a manner that facilitates inferencing (i.e. drawing conclusions) from it.
Some issues that arise in knowledge representation from an AI perspective are:
- How do people represent knowledge?
- What is the nature of knowledge and how do we represent it?
- Should a representation scheme deal with a particular domain or should it be general purpose?
- How expressive is a representation scheme or formal language?
- Should the scheme be declarative or procedural?
There has been very little top-down discussion of knowledge representation (KR) issues, and research in this area is a long-standing patchwork of loosely connected efforts. There are well-known problems such as "spreading activation" (a problem in navigating a network of nodes), "subsumption" (concerned with selective inheritance; e.g. an ATV can be thought of as a specialization of a car, but it inherits only particular characteristics) and "classification". For example, a tomato could be classified as both a fruit and a vegetable.
In the field of artificial intelligence, problem solving can be simplified by an appropriate choice of knowledge representation. Representing knowledge in some ways makes certain problems easier to solve. For example, it is easier to divide numbers represented in Hindu-Arabic numerals than numbers represented as Roman numerals.
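As a small, hedged illustration of that point (the conversion helper and sample values below are invented for this example), division becomes a single built-in operation once the numbers are in positional Hindu-Arabic form, whereas the Roman representation must first be translated:

    # Roman numerals must be converted before arithmetic; positional integers divide directly.
    ROMAN = {"I": 1, "V": 5, "X": 10, "L": 50, "C": 100, "D": 500, "M": 1000}

    def roman_to_int(numeral):
        total = 0
        for i, ch in enumerate(numeral):
            value = ROMAN[ch]
            # Subtractive notation: a smaller symbol before a larger one (e.g. IX = 9).
            if i + 1 < len(numeral) and ROMAN[numeral[i + 1]] > value:
                total -= value
            else:
                total += value
        return total

    print(roman_to_int("MCMXC") // roman_to_int("XLV"))  # 1990 // 45 == 44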
History of knowledge representation
In computer science, particularly artificial intelligence, a number of representations have been devised to structure information.
KR is most commonly used to refer to representations intended for processing by modern computers, and in particular, for representations consisting of explicit objects (the class of all elephants, or Clyde a certain individual), and of assertions or claims about them ('Clyde is an elephant', or 'all elephants are grey'). Representing knowledge in such explicit form enables computers to draw conclusions from knowledge already stored ('Clyde is grey').
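A minimal sketch of that inference, assuming a toy tuple-based representation (the relation names and the single hand-written rule are invented for illustration, not a standard KR system):

    # Explicit facts plus one general rule let a new conclusion be derived.
    facts = {
        ("Clyde", "is-a", "elephant"),
        ("elephant", "has-color", "grey"),   # stands in for "all elephants are grey"
    }

    def forward_chain(derived):
        """Rule: if X is-a C and C has-color V, then X has-color V."""
        derived = set(derived)
        while True:
            new = {
                (x, "has-color", v)
                for (x, rel1, c) in derived if rel1 == "is-a"
                for (c2, rel2, v) in derived if rel2 == "has-color" and c2 == c
            } - derived
            if not new:
                return derived
            derived |= new

    print(("Clyde", "has-color", "grey") in forward_chain(facts))  # True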
Many KR methods were tried in the 1970s and early 1980s, such as heuristic question-answering, neural networks, theorem proving, and expert systems, with varying success. Medical diagnosis (e.g., Mycin) was a major application area, as were games such as chess.
In the 1980s formal computer knowledge representation languages and systems arose. Major projects attempted to encode wide bodies of general knowledge; for example the "Cyc" project (still ongoing) went through a large encyclopedia, encoding not the information itself, but the information a reader would need in order to understand the encyclopedia: naive physics; notions of time, causality, motivation; commonplace objects and classes of objects.
Through such work, the difficulty of KR came to be better appreciated. In computational linguistics, meanwhile, much larger databases of language information were being built, and these, along with great increases in computer speed and capacity, made deeper KR more feasible.
Several programming languages have been developed that are oriented to KR. Prolog, developed in 1972[1] but popularized much later, represents propositions and basic logic, and can derive conclusions from known premises. KL-ONE (1980s) is more specifically aimed at knowledge representation itself. In 1995, the Dublin Core standard of metadata was conceived.
In the electronic document world, languages were being developed to represent the structure of documents, such as SGML (from which HTML descended) and later XML. These facilitated information retrieval and data mining efforts, which have in recent years begun to relate to knowledge representation.
Development of the Semantic Web has included development of XML-based knowledge representation languages and standards, including RDF, RDF Schema, Topic Maps, DARPA Agent Markup Language (DAML), Ontology Inference Layer (OIL), and Web Ontology Language (OWL).
Topics in knowledge representation
Language and notation
One view holds that knowledge is best represented in the same way that it is represented in the human mind, or in the form of human language.
Psycholinguistics investigates how the human mind stores and manipulates language. Other branches of cognitive science examine how human memory stores sounds, sights, smells, emotions, procedures, and abstract ideas. Science has not yet completely described the internal mechanisms of the brain to the point where they can simply be replicated by computer programmers.
Various artificial languages and notations have been proposed for representing knowledge. They are typically based on logic and mathematics, and have easily parsed grammars to ease machine processing. They usually fall into the broad domain of ontologies.
Ontology languages
After CycL, a number of ontology languages have been developed. Most are declarative languages, and are either frame languages, or are based on first-order logic. Most of these languages only define an upper ontology with generic concepts, whereas the domain concepts are not part of the language definition. Gellish English is an example of an ontological language that includes a full engineering English Dictionary.
Links and structures
While hyperlinks have come into widespread use, the closely related semantic link is not yet widely used. The mathematical table has been used since Babylonian times. More recently, tables have been used to represent the outcomes of logic operations; truth tables, for example, were used to study and model Boolean logic. Spreadsheets are yet another tabular representation of knowledge. Other knowledge representations are trees, by means of which the connections among fundamental concepts and derivative concepts can be shown.
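As a small illustration of the tabular representations just mentioned, the sketch below (an invented example, not from the article) prints the truth table of a Boolean expression:

    # Tabulate a Boolean expression over its variables.
    from itertools import product

    def truth_table(variables, expr):
        print(*variables, "result")
        for values in product([False, True], repeat=len(variables)):
            env = dict(zip(variables, values))
            print(*(int(v) for v in values), int(expr(env)))

    # Example: material implication p -> q.
    truth_table(["p", "q"], lambda env: (not env["p"]) or env["q"])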
Visual representations are relatively new in the field of knowledge management, but they give the user a way to visualise how one thought or idea is connected to other ideas, making it possible to move from one thought to another in order to locate required information. The approach is not without its competitors.[2]
Notation
The recent fashion in knowledge representation languages is to use XML as the low-level syntax. This tends to make the output of these KR languages easy for machines to parse, at the expense of human readability and often space-efficiency.
First-order predicate calculus is commonly used as a mathematical basis for these systems, to avoid excessive complexity. However, even simple systems based on this logic can be used to represent data that is well beyond the processing capability of current computer systems: see computability for reasons.
Examples of notations:
- DATR is an example of a language for representing lexical knowledge
- RDF is a simple notation for representing relationships between and among objects (a triple sketch follows this list)
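The sketch below illustrates the subject-predicate-object triple model that underlies RDF, using plain Python tuples rather than RDF syntax or any RDF library; the resource names and the match helper are invented for the example.

    # Relationships between objects as (subject, predicate, object) triples.
    triples = [
        ("Clyde", "type", "Elephant"),
        ("Elephant", "subClassOf", "Animal"),
        ("Clyde", "livesIn", "Zoo42"),
    ]

    def match(pattern, store):
        """Return the triples matching a pattern; None acts as a wildcard."""
        return [t for t in store
                if all(p is None or p == v for p, v in zip(pattern, t))]

    print(match(("Clyde", None, None), triples))       # everything asserted about Clyde
    print(match((None, "type", "Elephant"), triples))  # every instance of Elephant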
Storage and manipulation
One problem in knowledge representation is how to store and manipulate knowledge in an information system in a formal way so that it may be used by mechanisms to accomplish a given task. Examples of applications are expert systems, machine translation systems, computer-aided maintenance systems and information retrieval systems (including database front-ends).
Semantic networks may be used to represent knowledge. Each node represents a concept and arcs are used to define relations between the concepts. One of the most expressive and comprehensively described knowledge representation paradigms along the lines of semantic networks is MultiNet (an acronym for Multilayered Extended Semantic Networks).
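The following toy network is an illustrative sketch only; the concepts, relation labels and traversal code are assumptions, and it does not implement MultiNet. Nodes stand for concepts, labelled arcs for relations, and a breadth-first walk visits the concepts reachable from a starting node:

    # A semantic network as a dictionary from concept to labelled arcs.
    from collections import deque

    network = {
        "canary": [("is-a", "bird"), ("can", "sing")],
        "bird":   [("is-a", "animal"), ("has-part", "wings")],
        "animal": [("has-part", "skin")],
    }

    def explore(start, net):
        """Breadth-first walk printing every arc reachable from `start`."""
        seen, queue = {start}, deque([start])
        while queue:
            node = queue.popleft()
            for relation, target in net.get(node, []):
                print(f"{node} --{relation}--> {target}")
                if target not in seen:
                    seen.add(target)
                    queue.append(target)

    explore("canary", network)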
From the 1960s, the knowledge frame or just frame has been used. Each frame has its own name and a set of attributes, or slots which contain values; for instance, the frame for house might contain a color slot, number of floors slot, etc.
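A hedged sketch of a frame as a named collection of slots with default fillers, reusing the house example above; the particular slot values and the instantiate helper are invented for illustration.

    # A frame: a name plus slots, some holding default values.
    house_frame = {
        "name": "house",
        "slots": {
            "color": "white",          # default filler
            "number_of_floors": 1,     # default filler
            "rooms": None,             # to be filled per instance
        },
    }

    def instantiate(frame, **fillers):
        """Create an instance by copying the frame's defaults and overriding slots."""
        instance = dict(frame["slots"])
        instance.update(fillers)
        return instance

    print(instantiate(house_frame, color="red", rooms=5))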
Using frames for expert systems is an application of object-oriented programming, with inheritance of features described by the "is-a" link. However, there has been no small amount of inconsistency in the usage of the "is-a" link: Ronald J. Brachman wrote a paper titled "What IS-A is and isn't", wherein 29 different semantics were found in projects whose knowledge representation schemes involved an "is-a" link. Other links include the "has-part" link.
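The object-oriented reading of the "is-a" link can be sketched as class inheritance, as below; the class names and attributes are illustrative and not drawn from any particular KR system. A subclass may inherit a feature unchanged or override it, which hints at the varied "is-a" semantics Brachman catalogued.

    # "is-a" as inheritance: subclasses receive the parent's features by default.
    class Vehicle:
        wheels = 4                      # a default feature

        def describe(self):
            return f"{type(self).__name__}: {self.wheels} wheels"

    class Car(Vehicle):                 # Car is-a Vehicle; inherits everything
        pass

    class Motorcycle(Vehicle):          # is-a Vehicle, but overrides an inherited feature
        wheels = 2

    print(Car().describe())             # Car: 4 wheels
    print(Motorcycle().describe())      # Motorcycle: 2 wheels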
Frame structures are well-suited for the representation of schematic knowledge and stereotypical cognitive patterns. The elements of such schematic patterns are weighted unequally, with higher weights attributed to the more typical elements of a schema. A pattern is activated by certain expectations: if a person sees a big bird, he or she will classify it as a sea eagle rather than a golden eagle, assuming that the "sea schema" is currently activated and the "land schema" is not.
Frame representations are object-centered in the same sense as semantic networks are: All the facts and properties connected with a concept are located in one place - there is no need for costly search processes in the database.
A behavioral script is a type of frame that describes what happens temporally; the usual example given is that of describing going to a restaurant. The steps include waiting to be seated, receiving a menu, ordering, etc. The different solutions can be arranged in a so-called semantic spectrum with respect to their semantic expressivity.
References
- [1] Timeline: A Brief History of Artificial Intelligence, AAAI
- [2] Other visual search tools are built by Convera Corporation, Entopia, Inc., EPeople Inc., and Inxight Software Inc.
Further reading
- Ronald J. Brachman: What IS-A is and isn't. An Analysis of Taxonomic Links in Semantic Networks. IEEE Computer, 16 (10), October 1983
- Ronald J. Brachman, Hector J. Levesque: Knowledge Representation and Reasoning, Morgan Kaufmann, 2004, ISBN 978-1-55860-932-7
- Ronald J. Brachman, Hector J. Levesque (eds): Readings in Knowledge Representation, Morgan Kaufmann, 1985, ISBN 0-934613-01-X
- Randall Davis, Howard Shrobe, and Peter Szolovits: What Is a Knowledge Representation? AI Magazine, 14(1):17-33, 1993
- Ronald Fagin, Joseph Y. Halpern, Yoram Moses, Moshe Y. Vardi: Reasoning About Knowledge, MIT Press, 1995, ISBN 0-262-06162-7
- Jean-Luc Hainaut, Jean-Marc Hick, Vincent Englebert, Jean Henrard, Didier Roland: Understanding Implementations of IS-A Relations. ER 1996: 42-57
- Hermann Helbig: Knowledge Representation and the Semantics of Natural Language, Springer, Berlin, Heidelberg, New York, 2006
- Arthur B. Markman: Knowledge Representation, Lawrence Erlbaum Associates, 1998
- John F. Sowa: Knowledge Representation: Logical, Philosophical, and Computational Foundations. Brooks/Cole: New York, 2000
- Adrian Walker, Michael McCord, John F. Sowa, and Walter G. Wilson: Knowledge Systems and Prolog, Second Edition, Addison-Wesley, 1990
See also
- Category:Knowledge representation
External links
- What is a Knowledge Representation? by Randall Davis and others
- Introduction to Description Logics course by Enrico Franconi, Faculty of Computer Science, Free University of Bolzano, Italy
- DATR Lexical knowledge representation language
- Loom Project Home Page
- Description Logic in Practice: A CLASSIC Application
- The Rule Markup Initiative
- Schemas
- Nelements KOS - a generic 3d knowledge representation system