Neural Networks and Analog Computation: Beyond the Turing Limit (Progress in Theoretical Computer Science)

List Price: $71.95
Your Price: $71.95

Rating: 5 stars
Summary: Hypercomputation in the limits of classical physical reality
Review: A computer is an artifact. Through specific control mechanisms for electric currents it became possible to domesticate natural phenomena and put them at our service, giving rise to the levels of automation that characterize the world at the turn of the millennium. But a computer is an analog artifact. Paul Cull, from Oregon State University, puts this computational anecdote in the following terms: «That analog devices behave digitally is the basis for a large part of electronics engineering and allows for the construction of electronic computers. It is part of the engineering folklore that when the gain is high enough any circuit from a large class will eventually settle into one of two states, which can be used to represent booleans 0 and 1. As far as we can tell, this theorem and its proof has never been published, but it probably appears in a now unobtainable MIT technical report of the 1950s.» Recently, much work has been done to show that digital computers are a particular class of analog computers, and that analog computers can exhibit greater computational power. In fact, digital computers are extreme (weak) analog computers. A book was needed to introduce these ideas to the graduate student in Theoretical Computer Science and to the general researcher in the new field of Non-standard Models of Computation. Hava Siegelmann's book partially fills this gap in the computational literature.

Over the last decade, researchers have speculated that although the Turing model is indeed able to simulate a large class of computations, it does not necessarily provide a complete picture of the computations possible in nature. As pointed out by Hava Siegelmann, the most famous proposals of new models were made by Richard Feynman and Roger Penrose. Feynman suggested making use of the non-locality of quantum physics. Penrose, who was motivated by the model of the human brain, argued that the Turing model of computing is not strong enough to model biological intelligence. In response, several novel models of computation have been put forth, among them the quantum Turing machine and the DNA computer. These models compute faster than Turing machines and thus are richer under time constraints. However, they cannot compute non-recursive functions, and in this sense are not inherently more powerful than the classical model. The analog recurrent neural network model of Hava Siegelmann computes more than the Turing machine, not only under time constraints but also in general. In this sense it can be referred to as a hypercomputation model.

The use of analog recurrent neural networks for computability analysis is due to Hava Siegelmann and Eduardo Sontag. In her book, Siegelmann uses them to establish lower bounds on their computational power. These systems satisfy the classical constraints of computation theory, namely: (a) input is discrete (binary) and finite; (b) output is discrete (binary) and finite; and (c) the system itself is finite (control is finite). The infiniteness may originate from two different sources: the system may be influenced by a real value that directly affects the computation, be it a physical constant, the probability of a biased binary random coin, or any other process; or the infiniteness may come from the operations of an adaptive process interleaved with the computation process, as is the case in our brains. Neurons may hold values within [0,1] with unbounded precision. To work with such analog systems, binary input is encoded into a rational number between 0 and 1, and the rational output is decoded into an output binary sequence. The technique used in this book consists of an encoding of binary words into the Cantor set in base 4. Within this (number-theoretic) model, finite binary words are encoded as rational numbers in [0,1]. We may then identify the set of functions computable by analog recurrent neural nets, provided that the type of the weights is given. This research program has been systematically pursued by Hava Siegelmann at the Technion and by her collaborators.
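
To make this concrete, here is a minimal Python sketch of the base-4 Cantor-set encoding (the function names are my own, not the book's): each bit b becomes the base-4 digit 2b+1, that is, 1 or 3, so valid encodings avoid the digits 0 and 2, and the resulting gaps are what make decoding by finite means robust.

```python
from fractions import Fraction

def encode(word):
    """Encode a binary word into the base-4 Cantor set:
    bit b becomes the base-4 digit 2*b + 1 (i.e. 1 or 3)."""
    q = Fraction(0)
    for i, bit in enumerate(word, start=1):
        q += Fraction(2 * bit + 1, 4 ** i)
    return q

def decode(q, length):
    """Recover the binary word from its Cantor-set encoding."""
    bits = []
    for _ in range(length):
        q *= 4
        digit = int(q)          # 1 or 3 by construction
        bits.append((digit - 1) // 2)
        q -= digit
    return bits

word = [1, 0, 1, 1]
q = encode(word)                # 3/4 + 1/16 + 3/64 + 3/256
assert decode(q, len(word)) == word
print(q)                        # 223/256
```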

The first level of nets is NET[integers]. These nets are historically related to the work of Warren McCulloch and Walter Pitts. As the weights are integer numbers, each processor can only compute a linear combination with integer coefficients applied to zeros and ones. The activation values are thus always zero or one. In this case the nets 'degenerate' into classical devices called finite automata. It was Kleene who first proved that McCulloch and Pitts nets are equivalent to finite automata, and that they therefore recognize exactly the regular languages. But they are not capable of recognizing well-formed parenthesized expressions, or nucleic acid structures, for these languages are not regular...
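
As an illustration of this level, the following sketch, my own construction in the McCulloch-Pitts spirit rather than an example from the book, realizes the two-state finite automaton for parity using only integer weights and the hard-threshold activation:

```python
def H(z):
    """Hard threshold: the only activation integer-weight nets realize."""
    return 1 if z >= 0 else 0

def parity_net(bits):
    """Integer-weight net recognizing binary strings with an even
    number of 1s. State is one-hot: (q0, q1) = (even, odd)."""
    q0, q1 = 1, 0                       # start in the 'even' state
    for x in bits:
        # one threshold unit per (state, symbol) transition
        t00 = H(q0 - x - 1)             # in q0 and read 0 -> stay in q0
        t01 = H(q0 + x - 2)             # in q0 and read 1 -> go to q1
        t10 = H(q1 - x - 1)             # in q1 and read 0 -> stay in q1
        t11 = H(q1 + x - 2)             # in q1 and read 1 -> go to q0
        q0, q1 = H(t00 + t11 - 1), H(t01 + t10 - 1)
    return q0 == 1                      # accept iff we end in 'even'

assert parity_net([1, 0, 1]) is True    # two 1s: even
assert parity_net([1, 1, 1]) is False   # three 1s: odd
```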

The second relevant class Hava Siegelmann considers is NET[rationals]. Rationals are indeed computable numbers in finite time, and NET[rationals] turns out to be equivalent to Turing machines. Twofold equivalent: rational nets compute the same functions as Turing machines and, under an appropriate encoding of input and output, they are able to compute the same functions in exactly the same time. Even though rationals of increasing complexity are provided for free in nature, this resource does not even speed up computations with respect to Turing machines. The class NET[rationals] coincides with the class of (partial) recursive functions of Kurt Gödel and Kleene. It is said of them that they constitute the whole concrete, realizable, mathematical universe.
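
The heart of the equivalence is that a single rational-valued neuron can hold an entire stack (and two stacks simulate a Turing tape): under the base-4 Cantor encoding, push, pop and top each reduce to an affine map followed by the saturated-linear activation σ(x) = min(1, max(0, x)). A minimal sketch of this trick, with helper names of my own choosing:

```python
from fractions import Fraction

def sigma(x):
    """Saturated-linear activation used by the Siegelmann-Sontag nets."""
    return min(Fraction(1), max(Fraction(0), x))

# A pushdown stack held in ONE rational-valued neuron via the base-4
# Cantor encoding: every stack operation is affine followed by sigma.
def push(q, b):
    return sigma(q / 4 + Fraction(2 * b + 1, 4))

def top(q):
    return sigma(4 * q - 2)             # 0 or 1: the topmost bit

def pop(q):
    return sigma(4 * q - (2 * top(q) + 1))

q = Fraction(0)                         # empty stack
q = push(q, 1); q = push(q, 0)          # stack (top..bottom) = 0, 1
assert top(q) == 0
q = pop(q)
assert top(q) == 1
```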

The third relevant (and maybe surprising to the reader) class is NET[reals]. Reals are in general not computable. But theories of physics abound that use real variables. If the reader looks at these theories from a more epistemological point of view, as approximate models, then one may argue that, as long as no alternative theories are available, and if the old models can encode hypercomputation, then these models are not simulable on digital computers. The advantage of building a theory of computation on top of such systems is that nonuniform classes of computation, namely the classes that arise in complexity theory from Turing machines with advice, are uniformly described in NET[reals]. As shown in Hava Siegelmann's book, all sets over finite alphabets can be represented as reals that encode the families of boolean circuits recognizing them. Under efficient time computation, these networks compute not only all efficient computations by Turing machines but also some non-recursive functions, such as (a unary encoding of) the halting problem for Turing machines.
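
For readers who have not met advice classes before, here is a toy illustration, with a made-up advice table that is not taken from the book: a Turing machine with advice receives, besides its input, a string depending only on the input length, and NET[reals] packs such an advice family into the digits of a single real weight.

```python
# Nonuniform computation in miniature. The ADVICE table is a
# hypothetical stand-in: for each input length n it stores the
# characteristic vector of the accepted length-n strings (2**n bits).
ADVICE = {1: "10", 2: "0110", 3: "10010110"}

def decide(x: str) -> bool:
    """Accept x iff the advice bit indexed by x's binary value is 1.

    A Turing machine with advice gets ADVICE[len(x)] as an extra
    input; a real-weight net would instead read these bits out of
    the expansion of one real-valued weight."""
    advice = ADVICE[len(x)]
    return advice[int(x, 2)] == "1"

print(decide("10"))   # advice for length 2 is "0110"; index 2 -> '1' -> True
print(decide("00"))   # index 0 -> '0' -> False
```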

A novel connection between the complexity of the networks, in terms of information theory, and their computational complexity is developed, spanning a hierarchy of computation from the Turing model to the fully analog model. This leads to the statement of the Siegelmann-Sontag thesis of 'hypercomputation by analog systems', analogous to the Church-Turing thesis of 'computation by digital systems'.

A beautiful non-standard theory of computation is presented in 'Neural Networks and Analog Computation'. I strongly recommend a careful reading of Hava Siegelmann's book, to enjoy the uniformity of the nets' description and to ponder where hypercomputation begins within the limits of classical physical reality.

Rating: 1 stars
Summary: Cogently argued but fatally flawed
Review: Some of this book is an interesting discussion of the boundaries of computability. However, the book's central claim, that you can exceed the Turing limit, requires the storing of infinitely precise variables in a physical device. This is a physical impossibility which no amount of gratuitous logical notation will make go away. Even if you put aside the difficulties of measuring a value to infinite precision, quantum indeterminacy and discontinuity will not allow any physical object to store or encode an infinitely precise value in any fashion. Once this premise is seen to be false, most of the other interesting claims in the book, and all the hypercomputational ones, immediately collapse.

Rating: 1 stars
Summary: Lots of notation, little content
Review: This book certainly claims to give much more than it actually provides. Trying to read it, you will have to swallow a formalism that unfortunately does not pay off. There is absolutely no revolutionary idea here, just well-known facts and a pretension to do better than a Turing machine, resting on assumptions (namely, working with arbitrary precision) whose mere existence suffices to do better than any Turing machine; you don't need a whole book to say this.



© 2004, ReviewFocus or its affiliates