
Foundations of Inductive Logic Programming (Lecture Notes in Artificial Intelligence)

List Price: $74.95
Your Price: $74.95

Rating: 4 stars
Summary: A prelude to automated scientific discovery
Review: Inductive logic programming (ILP) has come a long way since the early work of the 1970s. The last decade saw the field expand rapidly as practical inductive logic programming systems were developed. Machines that engage not only in experimentation but also in scientific hypothesis generation competitive with human scientists have recently been reported in the literature. It remains to be seen how effective these machines are at producing useful scientific hypotheses, but the fact remains that inductive logic programming has played a major role in these developments, and it will no doubt continue to do so in the years to come. This book gives a good overview of ILP from a foundational and theoretical viewpoint. For lack of space, only selected chapters are reviewed here.

After a thorough overview of logic programming in the first part of the book, the discussion of inductive logic programming begins in part 2, namely in chapter 9, where the authors define induction as learning a general theory from specific examples. Inductive logic programming is characterized as the 'intersection of machine learning and logic programming', whose goal is to learn from examples within the framework of clausal logic. The examples and the 'background knowledge' are clauses, and the theory derived from them also consists of clauses. The examples are divided into 'positive' examples, which are true, and 'negative' examples, which are false; they are usually ground atoms or ground clauses, depending on the approach used for generalization. For the eventual theory to be meaningful, it must, together with the background knowledge, be 'complete' (imply the positive examples) and 'consistent' (not contradict the negative examples). A theory that is both complete and consistent is said to be 'correct'. The correct theory is not assumed to be unique; in fact, the authors assume there may be many "hidden" theories that could be extracted from the examples and background knowledge. Finding a satisfactory theory therefore requires a search through the space of permissible clauses.
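
To make these definitions concrete, here is a minimal propositional sketch of the correctness check (the tuple encoding of Horn clauses and the toy tweety example are my own invention; the book works in full first-order clausal logic):

```python
def consequences(rules):
    """Forward-chain over propositional Horn rules (head, body)
    until a fixed point; facts are rules with empty bodies."""
    known = set()
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return known

def is_correct(theory, background, positives, negatives):
    """Correct = complete (implies every positive example) and
    consistent (implies no negative example), given the background."""
    implied = consequences(theory + background)
    return (all(p in implied for p in positives)
            and all(n not in implied for n in negatives))

# Background: tweety is a penguin, and penguins are birds.
background = [("penguin_tweety", []), ("bird_tweety", ["penguin_tweety"])]
# Candidate theory: birds fly -- refuted by the negative example below.
theory = [("flies_tweety", ["bird_tweety"])]
print(is_correct(theory, background,
                 positives=["bird_tweety"],
                 negatives=["flies_tweety"]))  # -> False
```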

The authors distinguish between 'batch learning', in which all the examples are given at once, and 'incremental learning', in which the examples arrive one at a time. Also addressed is the need for bias in the search for theories, and the resulting trade-off between efficiency and the quality of the resulting theory. 'Predicate invention' is described as something that may be needed for successful theory construction. The authors stress that the results of the book can also be applied in a nonmonotonic setting.

A more formal discussion of ILP is given in chapter 10. The authors consider a clausal framework consisting of an 'observational language', which contains the positive and negative examples, and a 'hypothetical language', which is used to formulate the theory. An oracle is used to obtain the truth values of the examples. The authors discuss how to weaken a theory by backtracking, and how to strengthen a theory using refinement operators.
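
As a toy illustration of a refinement operator, the following propositional sketch specializes a definite clause by adding one body literal from a fixed vocabulary (the clause encoding and the vocabulary are invented for illustration; the book's operators act on first-order clauses under subsumption):

```python
VOCAB = ["bird", "penguin", "has_wings"]  # invented literal vocabulary

def refinements(clause):
    """All one-step downward refinements of (head, body):
    specialize the clause by adding one new body literal."""
    head, body = clause
    return [(head, body + [lit]) for lit in VOCAB if lit not in body]

print(refinements(("flies", ["bird"])))
```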

Chapter 11 discusses 'inverse resolution', a centerpiece of inductive theory discovery. Although not required for understanding the rest of the book, this chapter introduces the reader to a very important strategy for bottom-up approaches to ILP.
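
A flavor of inverse resolution can be given propositionally with the 'absorption' operator: from q ← A and p ← A, B it proposes the more general clause p ← q, B, so that resolving the result against q ← A recovers the original. The encoding below is my own invention, used purely for illustration:

```python
def absorb(c1, c2):
    """Absorption: given c1 = (q <- A) and c2 = (p <- A,B) with A a
    subset of c2's body, invert one resolution step to get p <- q,B."""
    q, a = c1
    p, body = c2
    if set(a) <= set(body):
        return (p, [q] + [lit for lit in body if lit not in a])
    return None  # operator does not apply

# bird <- has_feathers   and   flies <- has_feathers, has_wings
print(absorb(("bird", ["has_feathers"]),
             ("flies", ["has_feathers", "has_wings"])))
# -> ('flies', ['bird', 'has_wings'])
```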

In chapter 12, 'unfolding' is introduced as a specialization technique that is dual to inverse resolution, and consists of constructing resolvents from given parent clauses. It is used to correct a theory that is overly general. UDS specialization is discussed as a specialization technique that performs a finite number of applications of unfolding, clause deletion, and subsumption on a definite program, and is shown to be complete.
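
In propositional form, one unfolding step replaces a chosen body atom by the body of each clause defining it, producing the set of resolvents that stand in for the original clause (the representation below is invented for illustration):

```python
def unfold(clause, atom, program):
    """Unfold `atom` in the clause's body: one resolvent per program
    clause whose head is `atom`, with `atom` replaced by its body."""
    head, body = clause
    rest = [lit for lit in body if lit != atom]
    return [(head, defn_body + rest)
            for defn_head, defn_body in program if defn_head == atom]

program = [("bird", ["penguin"]), ("bird", ["sparrow"])]
print(unfold(("flies", ["bird", "has_wings"]), "bird", program))
# -> [('flies', ['penguin', 'has_wings']), ('flies', ['sparrow', 'has_wings'])]
```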

Using the results on lattices from chapter 13, namely the lattice of atoms quasi-ordered by subsumption, the authors show in chapter 14 that clausal languages and Horn clause languages are lattices under subsumption. This means that every finite collection of clauses has a least generalization and a greatest specialization under subsumption. They also show that one cannot generalize this to covers: there exist clauses that do not have finite complete sets of downward or upward covers. All of these results depend on defining a strict order on clauses, called the 'atomic order'. Particularly interesting in this discussion is the complexity measure the authors introduce on the set of clauses. This measure is not based on size; instead it is a pair of coordinates, the first being the size of the largest literal in the clause and the second the number of literals in the clause.
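
The least generalization of two atoms can be computed by Plotkin-style anti-unification: identical subterms are kept, and each pair of differing subterms is replaced by a shared variable. A small sketch, with terms encoded as nested tuples (an invented representation):

```python
def lgg(t1, t2, subst):
    """Anti-unify two terms; `subst` maps each differing pair of
    subterms to one shared variable, so repeated pairs generalize alike."""
    if t1 == t2:
        return t1
    if (isinstance(t1, tuple) and isinstance(t2, tuple)
            and t1[0] == t2[0] and len(t1) == len(t2)):
        return (t1[0],) + tuple(lgg(a, b, subst)
                                for a, b in zip(t1[1:], t2[1:]))
    if (t1, t2) not in subst:
        subst[(t1, t2)] = f"X{len(subst)}"
    return subst[(t1, t2)]

# The lgg of p(a, f(a)) and p(b, f(b)) is p(X0, f(X0)).
print(lgg(("p", "a", ("f", "a")), ("p", "b", ("f", "b")), {}))
# -> ('p', 'X0', ('f', 'X0'))
```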

The authors take up the difficulties of working with implication between clauses in chapter 15. Subsumption is prevalent in ILP because it is decidable, whereas implication is not. The authors give examples showing that subsumption is weaker than implication, and that subsumption is not sufficient for the construction of least generalizations. Recognizing that Horn clauses need not have a least generalization under implication, the authors give a criterion for when a finite set of clauses does have one: the set must contain at least one function-free non-tautologous clause. For the case of greatest specialization, they show that every finite set of clauses has a greatest specialization under implication.
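
Decidability of subsumption is easy to see for function-free clauses: C theta-subsumes D iff some substitution of C's variables maps every literal of C into D, and only finitely many substitutions into D's terms exist. A brute-force sketch (literals as tuples, variables as capitalized strings; these conventions are invented here, and polarity is ignored for simplicity):

```python
from itertools import product

def _vars(clause):
    return sorted({t for _, *args in clause for t in args if t[0].isupper()})

def _terms(clause):
    return sorted({t for _, *args in clause for t in args})

def _apply(clause, theta):
    return {(p,) + tuple(theta.get(t, t) for t in args) for p, *args in clause}

def subsumes(c, d):
    """True iff some substitution theta makes c*theta a subset of d."""
    vs, ts = _vars(c), _terms(d)
    return any(_apply(c, dict(zip(vs, combo))) <= set(d)
               for combo in product(ts, repeat=len(vs)))

print(subsumes([("p", "X", "Y")], [("p", "a", "b"), ("q", "a")]))  # -> True
```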

The role that background knowledge plays in subsumption and implication is the subject of chapter 16. This brings up the topic of 'relative subsumption', which dates back to the beginnings of inductive logic programming. The authors address the existence of least generalizations under relative subsumption by first giving a counterexample showing that in general they do not exist. They then give criteria that guarantee the existence of least generalizations under relative subsumption. The authors then discuss relative implication, showing that relative subsumption does not imply relative implication, and give criteria for the existence of least generalizations under relative implication. Also studied is a notion of generalized subsumption, which applies only to definite program clauses; its relation to implication and subsumption is discussed in detail.

Rating: 5 stars
Summary: An excellent text on logic programming!
Review: This book is broken into two parts: logic programming and inductive logic programming. Before I read this book I had previous exposure to automated reasoning and Prolog, but not a good understanding of how they were related. The first part of the book (logic) did an amazing job of moving from propositional logic all the way to the theory behind Prolog, and by the end of this part I saw the "big picture". The second half of the book seemed less magical, but certainly just as interesting. The chapter on unfolding seemed very good, and I was particularly intrigued by the attention paid to clausal languages and the different types of generality lattices that can be defined over both atoms and clauses. My first reaction to those chapters was "how can one design hierarchical neural networks to support efficient traversals along these lattice structures?". Maybe another book will address such a hybrid approach in the near future. There is also a nice chapter on PAC learning, which fills a void in the current literature. The writing and examples seemed both very clear and well-chosen throughout the entire book. In conclusion, I highly recommend this book to anyone interested in Prolog, automated reasoning, and/or machine learning.


© 2004, ReviewFocus or its affiliates