Catalogue Tag Display
MARC 21
Code Recognition and Set Selection with Neural Networks
Tag  Description
020  $a9781461232162$9978-1-4612-3216-2
082  $a511.3$223
099  $aOnline resource: Springer
245  $aCode Recognition and Set Selection with Neural Networks$h[EBook] /$cedited by Clark Jeffries.
260  $aBoston, MA :$bBirkhäuser Boston,$c1991.
300  $bonline resource.
336  $atext$btxt$2rdacontent
337  $acomputer$bc$2rdamedia
338  $aonline resource$bcr$2rdacarrier
440  $aMathematical Modeling ;$v7
505  $a0—The Neural Network Approach to Problem Solving -- 0.1 Defining a Neural Network -- 0.2 Neural Networks as Dynamical Systems -- 0.3 Additive and High Order Models -- 0.4 Examples -- 0.5 The Link with Neuroscience -- 1—Neural Networks as Dynamical Systems -- 1.1 General Neural Network Models -- 1.2 General Features of Neural Network Dynamics -- 1.3 Set Selection Problems -- 1.4 Infeasible Constant Trajectories -- 1.5 Another Set Selection Problem -- 1.6 Set Selection Neural Networks with Perturbations -- 1.7 Learning -- Problems and Answers -- 2—Hypergraphs and Neural Networks -- 2.1 Multiproducts in Neural Network Models -- 2.2 Paths, Cycles, and Volterra Multipliers -- 2.3 The Cohen-Grossberg Function -- 2.4 The Foundation Function ? -- 2.5 The Image Product Formulation of High Order Neural Networks -- Problems and Answers -- 3—The Memory Model -- 3.1 Dense Memory with High Order Neural Networks -- 3.2 High Order Neural Network Models -- 3.3 The Memory Model -- 3.4 Dynamics of the Memory Model -- 3.5 Modified Memory Models Using the Foundation Function -- 3.6 Comparison of the Memory Model and the Hopfield Model -- Problems and Answers -- 4—Code Recognition, Digital Communications, and General Recognition -- 4.1 Error Correction for Binary Codes -- 4.2 Additional Tests of the Memory Model as a Decoder -- 4.3 General Recognition -- 4.4 Scanning in Image Recognition -- 4.5 Commercial Neural Network Decoding -- Problems and Answers -- 5—Neural Networks as Dynamical Systems -- 5.1 A Two-Dimensional Limit Cycle -- 5.2 Wiring -- 5.3 Neural Networks with a Mixture of Limit Cycles and Constant Trajectories -- Problems and Answers -- 6—Solving Operations Research Problems with Neural Networks -- 6.1 Selecting Permutation Matrices with Neural Networks -- 6.2 Optimization in a Modified Permutation Matrix Selection Model -- 6.3 The Quadratic Assignment Problem -- Appendix A—An Introduction to Dynamical Systems -- A.1 Elements of Two-Dimensional Dynamical Systems -- A.2 Elements of n-Dimensional Dynamical Systems -- A.3 The Relation Between Difference and Differential Equations -- A.4 The Concept of Stability -- A.5 Limit Cycles -- A.6 Lyapunov Theory -- A.7 The Linearization Theorem -- A.8 The Stability of Linear Systems -- Appendix B—Simulation of Dynamical Systems with Spreadsheets -- References -- Index of Key Words -- Epilog.
520  $aIn mathematics there are limits, speed limits of a sort, on how many computational steps are required to solve certain problems. The theory of computational complexity deals with such limits, in particular whether solving an n-dimensional version of a particular problem can be accomplished with, say, n^2 steps or will inevitably require 2^n steps. Such a bound, together with a physical limit on computational speed in a machine, could be used to establish a speed limit for a particular problem. But there is nothing in the theory of computational complexity which precludes the possibility of constructing analog devices that solve such problems faster. It is a general goal of neural network researchers to circumvent the inherent limits of serial computation. As an example of an n-dimensional problem, one might wish to order n distinct numbers between 0 and 1. One could simply write all n! ways to list the numbers and test each list for the increasing property. There are much more efficient ways to solve this problem; in fact, the number of steps required by the best sorting algorithm applied to this problem is proportional to n ln n.
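The abstract's sorting example can be sketched in a few lines: a brute-force check of all n! orderings against Python's built-in sort (which runs in O(n log n) steps). The function name `brute_force_sort` is illustrative, not from the book.

```python
import itertools
import random

def brute_force_sort(nums):
    """Try every one of the n! orderings and return the increasing one."""
    for perm in itertools.permutations(nums):
        if all(perm[i] <= perm[i + 1] for i in range(len(perm) - 1)):
            return list(perm)

# n distinct numbers between 0 and 1, as in the abstract.
nums = [random.random() for _ in range(7)]
assert brute_force_sort(nums) == sorted(nums)  # same answer, vastly more steps
```

Even for n = 12 the brute-force search already faces 12! ≈ 4.8 × 10^8 candidate lists, while a good sorting algorithm needs only on the order of n ln n comparisons.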
538  $aOnline access to this digital book is restricted to subscription institutions through IP address (only for SISSA internal users)
700  $aJeffries, Clark.$eeditor.
710  $aSpringerLink (Online service)
830  $aMathematical Modeling ;$v7
856  $uhttp://dx.doi.org/10.1007/978-1-4612-3216-2
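In the record above, each `$x` marks a MARC subfield delimiter followed by a one-character subfield code (e.g. `$a` title, `$c` statement of responsibility in field 245). As a rough sketch — not the catalogue's own software — a field body in this display format can be split into (code, value) pairs:

```python
import re

def parse_subfields(field):
    """Split a MARC field body such as '$aTitle$h[EBook]' into (code, value) pairs."""
    # Each subfield starts with '$' followed by a single-character code;
    # the value runs until the next '$' or the end of the field.
    return [(m.group(1), m.group(2)) for m in re.finditer(r"\$(.)([^$]*)", field)]

pairs = parse_subfields("$aBoston, MA :$bBirkhäuser Boston,$c1991.")
# → [('a', 'Boston, MA :'), ('b', 'Birkhäuser Boston,'), ('c', '1991.')]
```

Note that real MARC transmission records use the control character 0x1F as the delimiter; the `$` sign is only how this OPAC renders it, so the regex here applies to the displayed form only.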