1. Neurons : a mathematical ignition [2015]
 Hata, Masayoshi, author.
 Singapore ; Hackensack, NJ : World Scientific Publishing Co., [2015]
 Description
 Book — xiii, 216 pages ; 24 cm.
 Summary

 This unique volume presents a fruitful and beautiful mathematical world hidden in Caianiello's neuronic equations, which describe the instantaneous behavior of a model of a brain or thinking machine. The detailed analysis from the viewpoint of "dynamical systems", even in the single-neuron case, enables us to obtain amazingly good rational approximations to the Hecke-Mahler series with two variables. Some interesting numerical applications of our rational approximations are also discussed. This book is fundamentally self-contained, and many of the topics required in it are explained from the beginning. Each chapter contains a number of instructive and mostly original exercises at various levels.
 (source: Nielsen Book Data)
Science Library (Li and Ma)
Stacks  QA76.87 .H38 2015  Status: Unknown
2. Space-time computing with temporal neural networks [2017]
 Smith, James E. (James Edward), 1950-, author.
 [San Rafael, California] : Morgan & Claypool, 2017.
 Description
 Book — 1 PDF (xxv, 215 pages).
 Summary

 Part I. Introduction to space-time computing and temporal neural networks
 1. Introduction
 1.1 Basics of neuron operation
 1.2 Space-time communication and computation
 1.2.1 Communication
 1.2.2 Computation
 1.2.3 Discussion
 1.3 Background: neural network models
 1.3.1 Rate coding
 1.3.2 Temporal coding
 1.3.3 Rate processing
 1.3.4 Spike processing
 1.3.5 Summary and taxonomy
 1.4 Background: machine learning
 1.5 Approach: interaction of computer engineering and neuroscience
 1.6 Bottom-up analysis: a guiding analogy
 1.7 Overview 
 2. Space-time computing
 2.1 Definition of terms
 2.2 Feedforward computing networks
 2.3 General TNN model
 2.4 Space-time computing systems
 2.5 Implications of invariance
 2.6 TNN system architecture
 2.6.1 Training
 2.6.2 Computation (evaluation)
 2.6.3 Encoding
 2.6.4 Decoding
 2.7 Summary: meta-architecture
 2.7.1 Simulation
 2.7.2 Implied functions
 2.8 Special case: feedforward McCulloch-Pitts networks
 2.9 Race logic 
 3. Biological overview
 3.1 Overall brain structure (very brief)
 3.2 Neurons
 3.2.1 Synapses
 3.2.2 Synaptic plasticity
 3.2.3 Frequency-current relationship
 3.2.4 Inhibition
 3.3 Hierarchy and columnar organization
 3.3.1 Neurons
 3.3.2 Columns (microcolumns)
 3.3.3 Macrocolumns
 3.3.4 Regions
 3.3.5 Lobes
 3.3.6 Uniformity
 3.4 Interneuron connections
 3.4.1 Path distances
 3.4.2 Propagation velocities
 3.4.3 Transmission delays
 3.4.4 Numbers of connections
 3.4.5 Attenuation of excitatory responses
 3.4.6 Connections summary
 3.5 Sensory processing
 3.5.1 Receptive fields
 3.5.2 Saccades and whisks
 3.5.3 Vision pathway
 3.5.4 Waves of spikes
 3.5.5 Feedforward processing path
 3.5.6 Precision
 3.5.7 Information content
 3.5.8 Neural processing
 3.6 Oscillations
 3.6.1 Theta oscillations
 3.6.2 Gamma oscillations 
 Part II. Modeling temporal neural networks
 4. Connecting TNNs with biology
 4.1 Communication via voltage spikes
 4.2 Columns and spike bundles
 4.3 Spike synchronization
 4.3.1 Aperiodic synchronization: saccades, whisks, and sniffs
 4.3.2 Periodic synchronization
 4.4 First spikes carry information
 4.5 Feedforward processing
 4.6 Simplifications summary
 4.7 Plasticity and training
 4.8 Fault tolerance and temporal stability
 4.8.1 Interwoven fault tolerance
 4.8.2 Temporal stability
 4.8.3 Noise (or lack thereof)
 4.9 Discussion: reconciling biological complexity with model simplicity
 4.10 Prototype architecture overview 
 5. Neuron modeling
 5.1 Basic models
 5.1.1 Hodgkin-Huxley neuron model
 5.1.2 Derivation of the leaky integrate and fire (LIF) model
 5.1.3 Spike response model (SRM0)
 5.2 Modeling synaptic connections
 5.3 Excitatory neuron implementation
 5.4 The menagerie of LIF neurons
 5.4.1 Synaptic conductance model
 5.4.2 Biexponential SRM0 model
 5.4.3 Single stage SRM0
 5.4.4 Linear leak integrate and fire (LLIF)
 5.5 Other neuron models
 5.5.1 Alpha function
 5.5.2 Quadratic integrate-and-fire
 5.6 Synaptic plasticity and training 
 6. Computing with excitatory neurons
 6.1 Single neuron clustering
 6.1.1 Definitions
 6.1.2 Excitatory neuron function, approximate description
 6.1.3 Looking ahead
 6.2 Spike coding
 6.2.1 Volleys
 6.2.2 Nonlinear mappings
 6.2.3 Distance functions
 6.3 Prior work: radial basis function (RBF) neurons
 6.4 Excitatory neuron I: training mode
 6.4.1 Modeling excitatory response functions
 6.4.2 Training set
 6.4.3 STDP update rule
 6.4.4 Weight stabilization
 6.5 Excitatory neuron I: compound response functions
 6.6 Excitatory neuron model II
 6.6.1 Neuron model derivation
 6.6.2 Training mode
 6.6.3 Evaluation mode
 6.7 Attenuation of excitatory responses
 6.8 Threshold detection
 6.9 Excitatory neuron model II summary 
 7. System architecture
 7.1 Overview
 7.2 Interconnection structure
 7.3 Input encoding
 7.4 Excitatory column operation
 7.4.1 Evaluation
 7.4.2 Training
 7.4.3 Unsupervised synaptic weight training
 7.4.4 Supervised weight training
 7.5 Inhibition
 7.5.1 Feedback inhibition
 7.5.2 Lateral inhibition
 7.5.3 Feedforward inhibition
 7.6 Volley decoding and analysis
 7.6.1 Temporal flattening
 7.6.2 Decoding to estimate clustering quality
 7.6.3 Decoding for classification
 7.7 Training inhibition
 7.7.1 FFI: establishing t_F and k_F
 7.7.2 LI: establishing t_L and k_L
 7.7.3 Excitatory neuron training in the presence of inhibition
 Part III: extended design study: clustering the MNIST dataset 
 8. Simulator implementation
 8.1 Simulator overview
 8.2 Interunit communication
 8.3 Simulating time
 8.4 Synaptic weight training
 8.5 Evaluation
 8.5.1 EC block
 8.5.2 IC block
 8.5.3 VA block
 8.6 Design methodology 
 9. Clustering the MNIST dataset
 9.1 MNIST workload
 9.2 Prototype clustering architecture
 9.3 On-Off encoding
 9.4 IntraCC network
 9.5 Excitatory column (EC)
 9.6 Lateral inhibition
 9.7 144 RFs
 9.8 Feedforward inhibition
 9.9 Layer 1 result summary
 9.10 Related work
 9.11 Considering layer 2 
 10. Summary and conclusions
 References
 Author biography.
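The contents above revolve around the leaky integrate-and-fire (LIF) neuron model derived in Section 5.1.2. As a rough, self-contained sketch of the standard LIF dynamics (parameter values are illustrative choices, not taken from the book):

```python
import numpy as np

def lif_simulate(input_current, dt=1e-3, tau_m=20e-3, r_m=10e6,
                 v_rest=-70e-3, v_thresh=-54e-3, v_reset=-80e-3):
    """Forward-Euler integration of the standard LIF equation
    tau_m * dV/dt = -(V - v_rest) + r_m * I(t); on a threshold
    crossing the neuron emits a spike and the potential is reset."""
    v = v_rest
    trace, spikes = [], []
    for i, current in enumerate(input_current):
        # leak toward the resting potential plus driven input
        v += (-(v - v_rest) + r_m * current) * (dt / tau_m)
        if v >= v_thresh:
            spikes.append(i)  # record spike time index, then reset
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

# A constant suprathreshold current (2 nA) produces periodic spiking.
trace, spikes = lif_simulate(np.full(1000, 2.0e-9))
```

With these values the steady-state potential (-50 mV) sits above threshold (-54 mV), so the neuron fires repeatedly; dropping the current below about 1.6 nA would silence it.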
3. Neural network learning and expert systems [1993]
 Gallant, Stephen I.
 Cambridge, Mass. : MIT Press, c1993.
 Description
 Book — xvi, 365 p. : ill. ; 24 cm.
 Summary

 FOREWORD
 1. Introduction and Important Definitions
 2. Representation Issues
 3. Perceptron Learning and the Pocket Algorithm
 4. Winner-Take-All Groups or Linear Machines
 5. Autoassociators and One-Shot Learning
 6. Mean Squared Error (MSE) Algorithms
 7. Unsupervised Learning
 8. The Distributed Method and Radial Basis Functions
 9. Computational Learning Theory and the BRD Algorithm
 10. Constructive Algorithms
 11. Backpropagation
 12. Backpropagation: Variations and Applications
 13. Simulated Annealing and Boltzmann Machines
 14. Expert Systems and Neural Networks
 15. Details of the MACIE System
 16. Noise, Redundancy, Fault Detection, and Bayesian Decision Theory
 17. Extracting Rules from Networks
 Appendix Representation Comparisons
 Bibliography
 INDEX.
(source: Nielsen Book Data)
SAL3 (off-campus storage)
Stacks  QA76.87 .G35 1993  Status: Available (Request)
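Chapter 3 of the record above covers perceptron learning and the pocket algorithm. A minimal sketch of the pocket idea as commonly described (variable names and the toy dataset are ours, not the book's):

```python
import random

def pocket_train(samples, steps=2000, seed=0):
    """Pocket algorithm: run ordinary perceptron updates, but keep
    ('pocket') the weight vector that survived the longest streak of
    consecutive correct classifications on randomly drawn samples.
    samples: list of (x, y) with y in {-1, +1}; x includes a bias
    component as its first entry."""
    rng = random.Random(seed)
    dim = len(samples[0][0])
    w = [0.0] * dim
    pocket_w, best_run, run = list(w), 0, 0
    for _ in range(steps):
        x, y = rng.choice(samples)
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1
        if pred == y:
            run += 1
            if run > best_run:  # current w is doing better: pocket it
                best_run, pocket_w = run, list(w)
        else:
            run = 0
            w = [wi + y * xi for wi, xi in zip(w, x)]  # perceptron update
    return pocket_w

# Linearly separable AND-like data, bias folded in as x[0] = 1.
data = [((1, 0, 0), -1), ((1, 0, 1), -1), ((1, 1, 0), -1), ((1, 1, 1), 1)]
w = pocket_train(data)
```

On separable data the pocketed weights converge to a perfect separator; on non-separable data (the pocket algorithm's real motivation) they approximate the best achievable linear rule rather than cycling forever.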
4. Multiple-time-scale dynamical systems [2001]
 Jones, C. K. R. T. (Christopher K. R. T.)
 New York : Springer, ©2001.
 Description
 Book — 1 online resource (xi, 263 pages) : illustrations.
 Summary

 Foreword. Preface. Homoclinic orbits to invariant tori in Hamiltonian systems. Geometric singular perturbation theory beyond normal hyperbolicity. A primer on the exchange lemma for fast-slow systems. Geometric analysis of the singularly perturbed planar fold. Multiple time scales and canards in a chemical oscillator. A geometric method for periodic orbits in singularly perturbed systems. The phenomenon of delayed bifurcation and its analyses. Synchrony in networks of neuronal oscillators. Metastable dynamics and exponential asymptotics in multidimensional domains. List of workshop participants.
 (source: Nielsen Book Data)
5. Self-organizing maps [1997]
 Kohonen, Teuvo.
 2nd ed.  Berlin ; New York : Springer, ©1997.
 Description
 Book — 1 online resource (xvii, 426 pages) : illustrations.
 Summary

 Mathematical preliminaries. Justification of neural modelling. The basic SOM. Physiological interpretation of SOM. Variants of SOM. Learning vector quantization. Applications. Hardware for SOM. An overview of SOM literature. Glossary of "neural" terms.
 (source: Nielsen Book Data)
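The summary above lists the basic SOM as the book's core. As a rough one-dimensional sketch of self-organizing map training (all hyperparameters and the decay schedule are illustrative assumptions, not Kohonen's prescriptions):

```python
import numpy as np

def train_som(data, n_units=10, epochs=200, lr0=0.5, sigma0=3.0, seed=0):
    """Minimal 1-D self-organizing map. Each step finds the
    best-matching unit (BMU) for a sample and pulls the BMU and its
    grid neighbours toward the sample; the learning rate and the
    neighbourhood width both shrink as training proceeds."""
    rng = np.random.default_rng(seed)
    weights = rng.random((n_units, data.shape[1]))
    grid = np.arange(n_units)  # positions of units on the 1-D grid
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1.0 - frac)                  # decaying learning rate
        sigma = max(sigma0 * (1.0 - frac), 0.5)  # shrinking neighbourhood
        for x in data[rng.permutation(len(data))]:
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))
            h = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

# Map scalar inputs uniform on [0, 1] onto a line of 10 units;
# after training the unit weights spread out across the interval.
data = np.linspace(0.0, 1.0, 50)[:, None]
w = train_som(data)
```

The neighbourhood function is what distinguishes SOM from plain vector quantization: because neighbours on the grid are updated together, adjacent units end up coding nearby inputs, giving a topology-preserving map.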