Neural Networks, Volume 14
Volume 14, Number 1, January 2001
- Thomas J. Anastasio: A pattern correlation model of vestibulo-ocular reflex habituation. 1-22
- Johan A. K. Suykens, Joos Vandewalle, Bart De Moor: Optimal control by least squares support vector machines. 23-35
- Vladimir Cherkassky, Xuhui Shao: Signal estimation and denoising using VC-theory. 37-52
- Masashi Sugiyama, Hidemitsu Ogawa: Incremental projection learning for optimal generalization. 53-66
- Masashi Sugiyama, Hidemitsu Ogawa: Properties of incremental projection learning. 67-78
- Akiko Nakashima, Akira Hirabayashi, Hidemitsu Ogawa: Error correcting memorization learning for noisy training examples. 79-92
- Armando Blanco, Miguel Delgado, Maria del Carmen Pegalajar Jiménez: A real-coded genetic algorithm for training recurrent neural networks. 93-105
- Jooyoung Park, H.-Y. Kim, Y. Park, S.-W. Lee: A synthesis procedure for associative memories based on space-varying cellular neural networks. 107-113
- Gee-Hyuk Lee, Nabil H. Farhat: The Bifurcating Neuron Network 1. 115-131
Volume 14, Number 2, March 2001
- Jean-Jacques E. Slotine, Winfried Lohmiller: Modularity, evolution, and the binding problem: a view from stability theory. 137-145
- Chi-Sing Leung, Kwok-Wo Wong, Pui-Fai Sum, Lai-Wan Chan: A pruning method for the recursive least squared algorithm. 147-174
- Pierre Courrieu: Two methods for encoding clusters. 175-183
- Nam Mai-Duy, Thanh Tran-Cong: Numerical solution of differential equations using multiquadric radial basis function networks. 185-199
- Tomohiro Shibata, Stefan Schaal: Biomimetic gaze stabilization based on feedback-error-learning with nonparametric regression networks. 201-216
- Chuangyin Dang, Lei Xu: A globally convergent Lagrange and barrier function iterative algorithm for the traveling salesman problem. 217-230
- Alexander Nikov, Stefka Stoeva: Quick fuzzy backpropagation algorithm. 231-244
Volume 14, Number 3, April 2001
- Tianping Chen, Shun-ichi Amari: New theorems on global convergence of some dynamical systems. 251-255
- Jouko Lampinen, Aki Vehtari: Bayesian approach for neural networks--review and case studies. 257-274
- Jan Storck, Frank Jäkel, Gustavo Deco: Temporal clustering with spiking neurons and dynamic synapses: towards technological applications. 275-285
- Victoria J. Hodge, Jim Austin: An evaluation of standard retrieval algorithms and a binary neural approach. 287-303
- Hiok Chai Quek, K. B. Tan, Vijay K. Sagar: Pseudo-outer product based fuzzy neural network fingerprint verification system. 305-323
- Eric Granger, Mark A. Rubin, Stephen Grossberg, Pierre Lavoie: A What-and-Where fusion neural network for recognition and tracking of multiple radar emitters. 325-344
- Ladan Shams, Mark J. Brady, Stefan Schaal: Graph matching vs mutual information maximization for object detection. 345-354
- Birsel Ayrulu, Billur Barshan: Neural networks for improved target differentiation and localization with sonar. 355-373
Volume 14, Number 4-5, May 2001
- Yasuhiro Wada, Yuichi Kaneko, Eri Nakano, Rieko Osu, Mitsuo Kawato: Quantitative examinations for multi joint arm trajectory planning--using a robust calculation algorithm of the minimum commanded torque change trajectory. 381-393
- Masataka Watanabe, Kousaku Nakanishi, Kazuyuki Aihara: Solving the binding problem of the brain with bi-directional functional connectivity. 395-406
- J. Manuel Cano Izquierdo, Yannis A. Dimitriadis, Eduardo Gómez-Sánchez, Juan López Coronado: Learning from noisy information in FasArt and FasBack neuro-fuzzy systems. 407-425
- Francesco Vivarelli, Christopher K. I. Williams: Comparing Bayesian neural network algorithms for classifying segmented outdoor images. 427-437
- Friedhelm Schwenker, Hans A. Kestler, Günther Palm: Three learning phases for radial-basis-function networks. 439-458
- Akiko Nakashima, Hidemitsu Ogawa: Noise suppression in training examples for improving generalization capability. 459-469
- Edmondo Trentin: Networks with trainable amplitude of activation functions. 471-493
- Marifi Güler: A model with an intrinsic property of learning higher order correlations. 495-504
- Marcos M. Campos, Gail A. Carpenter: S-TREE: self-organizing trees for data clustering and online vector quantization. 505-525
- Sorin Draghici: The constraint based decomposition (CBD) training architecture. 527-550
- Fred H. Hamker: Life-long learning Cell Structures--continuously learning without catastrophic interference. 551-573
- Amar Mitiche, M. Lebidoff: Pattern classification by a condensed neural network. 575-580
Volume 14, Number 6-7, July 2001
- Stephen Grossberg, Wolfgang Maass, Henry Markram: Introduction: Spiking Neurons in Neuroscience and Technology. 587-
- George L. Gerstein, Kyle L. Kirkland: Neural assemblies: technical issues, analysis, and modeling. 589-598
- Wulfram Gerstner: Coding properties of spiking neurons: reverse and cross-correlations. 599-610
- Hiroyuki Uchiyama, Koichi Goto, Hiroyuki Matsunobu: ON-OFF retinal ganglion cells temporally encode OFF/ON sequence. 611-615
- André van Schaik: Building blocks for electronic spiking neural networks. 617-628
- Shih-Chii Liu, Jörg Kramer, Giacomo Indiveri, Tobi Delbrück, Thomas Burg, Rodney J. Douglas: Orientation-selective aVLSI spiking neurons. 629-643
- Kai M. Hynna, Kwabena Boahen: Space-rate coding in an adaptive silicon neuron. 645-656
- Marc-Oliver Gewaltig, Markus Diesmann, Ad Aertsen: Propagation of cortical synfire activity: survival probability in single trials and stability in the mean. 657-673
- Hideyuki Câteau, Tomoki Fukai: Fokker-Planck approach to the pulse packet propagation in synfire chain. 675-685
- Luis Fernando Lago-Fernández, Fernando J. Corbacho, Ramón Huerta: Connection topology dependence of synchronization of neural assemblies on class 1 and 2 excitability. 687-696
- Ralph M. Siegel, Heather L. Read: Deterministic dynamics emerging from a cortical functional architecture. 697-713
- Simon J. Thorpe, Arnaud Delorme, Rufin Van Rullen: Spike-based strategies for rapid processing. 715-725
- D. Chawla, Karl J. Friston, Erik D. Lumer: Zero-lag synchronous dynamics in triplets of interconnected cortical areas. 727-735
- Peter A. Cariani: Neural timing nets. 737-753
- Timothy K. Horiuchi, Kai M. Hynna: Spike-based VLSI modeling of the ILD system in the echolocating bat. 755-762
- Andreas Knoblauch, Günther Palm: Pattern separation and synchronization in spiking associative memories and visual areas. 763-780
- David H. Goldberg, Gert Cauwenberghs, Andreas G. Andreou: Probabilistic synaptic weighting in a reconfigurable network of VLSI integrate-and-fire neurons. 781-793
- Arnaud Delorme, Simon J. Thorpe: Face identification using one spike per neuron: resistance to image degradations. 795-803
- Christian Leibold, J. Leo van Hemmen: Temporal receptive fields, spikes, and Hebbian delay selection. 805-813
- Nir Levy, David Horn, Isaac Meilijson, Eytan Ruppin: Distributed synchrony in a cell assembly of spiking neurons. 815-824
- Friedrich T. Sommer, Thomas Wennekers: Associative memory in networks of spiking neurons. 825-834
- Nanayaa Twum-Danso, Roger W. Brockett: Trajectory estimation from place cell data. 835-844
- Mark D. Humphries, Kevin N. Gurney: A pulsed neural network model of bursting in the basal ganglia. 845-863
- Pablo Varona, Joaquín J. Torres, Ramón Huerta, Henry D. I. Abarbanel, Mikhail I. Rabinovich: Regularization mechanisms of spiking-bursting neurons. 865-875
- Michael G. Paulin, Larry F. Hoffman: Optimal firing rate estimation. 877-881
- Eugene M. Izhikevich: Resonate-and-fire neurons. 883-894
- Khashayar Pakdaman, Seiji Tanabe, Tetsuya Shimokawa: Coherence resonance and discharge time reliability in neurons and neuronal models. 895-905
- Jonghan Shin: Adaptation in spiking neurons based on the noise shaping neural coding hypothesis. 907-919
- Gee-Hyuk Lee, Nabil H. Farhat: The double queue method: a numerical method for integrate-and-fire neuron networks. 921-932
- Nicolangelo Iannella, Andrew D. Back: A spiking neural network architecture for nonlinear function approximation. 933-939
- Marc de Kamps, Frank van der Velde: From artificial neural networks to spiking neuron populations and back again. 941-953
- Jianfeng Feng: Is the integrate-and-fire model good enough?--a review. 955-975
Volume 14, Number 8, October 2001
- Tianping Chen: Global exponential stability of delayed Hopfield neural networks. 977-980
- Silvia Corchs, Gustavo Deco: A neurodynamical model for selective visual attention using oscillators. 981-990
- Kevin N. Gurney: Information processing in dendrites: I. Input pattern generalisation. 991-1004
- Kevin N. Gurney: Information processing in dendrites: II. Information theoretic complexity. 1005-1022
- Dirk Tomandl, Andreas Schober: A Modified General Regression Neural Network (MGRNN) with new, efficient training algorithms as a robust 'black box'-tool for data analysis. 1023-1034
- J. Gareth Polhill, Michael K. Weir: An approach to guaranteeing generalisation in neural networks. 1035-1048
- Sumio Watanabe: Algebraic geometrical methods for hierarchical learning machines. 1049-1060
- Ruye Wang: A hybrid learning network for shift-invariant recognition. 1061-1073
- Nikolaos Ampazis, Stavros J. Perantonis, John G. Taylor: A dynamical model for the analysis and acceleration of learning in feedforward networks. 1075-1088
- Tomas Hrycej: Estimates of average complexity of neurocontrol algorithms. 1089-1098
- Michel Pasquier, Hiok Chai Quek, Mary Toh: Fuzzylot: a novel self-organising fuzzy-neural rule-based pilot system for automated vehicles. 1099-1112
- A. J. B. Travis: Real time distributed processing of multiple associated pulse pattern sequences. 1113-1127
- Vladimir Cherkassky, Steven Kilts: Myopotential denoising of ECG signals using wavelet thresholding methods. 1129-1137
Volume 14, Number 9, November 2001
- Yutaka Sakai: Neuronal integration mechanisms have little effect on spike auto-correlations of cortical neurons. 1145-1152
- Amir Karniel, Ron Meir, Gideon F. Inbar: Best estimated inverse versus inverse of the best estimator. 1153-1159
- Yacine Oussar, Gérard Dreyfus: How to be a gray box: dynamic semi-physical modeling. 1161-1172
- Kenji Okajima: An Infomax-based learning rule that generates cells similar to visual cortical neurons. 1173-1180
- Chunhua Feng, Réjean Plamondon: On the stability analysis of delayed neural networks systems. 1181-1188
- Nobuhiko Ikeda, Paul Watta, Metin Artiklar, Mohamad H. Hassoun: A two-level Hamming network for high performance associative memory. 1189-1200
- Ashit Talukder, David P. Casasent: A closed-form neural network for discriminatory feature extraction from high-dimensional data. 1201-1218
- Giuseppe Patanè, Marco Russo: The enhanced LBG algorithm. 1219-1237
- Shin Ishii, Masa-aki Sato: Reconstruction of chaotic dynamics by on-line EM algorithm. 1239-1256
- Alan F. Murray: Novelty detection using products of simple experts--a potential architecture for embedded systems. 1257-1264
- Md. Monirul Islam, Kazuyuki Murase: A new algorithm to design compact two-hidden-layer artificial neural networks. 1265-1278
- Anna Koufakou, Michael Georgiopoulos, Georgios C. Anagnostopoulos, Takis Kasparis: Cross-validation in Fuzzy ARTMAP for large databases. 1279-1291
- Khaled Ahmed Nagaty: Fingerprints classification using artificial neural networks: a combined structural and statistical approach. 1293-1305
- Hiroshi Wakuya, Jacek M. Zurada: Bi-directional computing architecture for time series prediction. 1307-1321
Volume 14, Number 10, December 2001
- Antony Browne, Ron Sun: Connectionist inference models. 1331-1355
- Mark E. Jackson, Oleg Litvak, James W. Gnadt: Analysis of the frequency response of the saccadic circuit: numerical simulations. 1357-1376
- Tianping Chen, Shun-ichi Amari: Unified stabilization approach to principal and minor components extraction algorithms. 1377-1387
- Kotaro Hirasawa, Sung-Ho Kim, Jinglu Hu, Junichi Murata, Min Han, Chunzhi Jin: Improvement of generalization ability for identifying dynamical systems by using universal learning networks. 1389-1404
- John A. Flanagan: Self-organization in the one-dimensional SOM with a decreasing neighborhood. 1405-1417
- Katsuyuki Hagiwara, Taichi Hayasaka, Naohiro Toda, Shiro Usui, Kazuhiro Kuno: Upper bound of the expected training error of neural network regression for a Gaussian noise sequence. 1419-1429
- Hiok Chai Quek, Ruowei Zhou: The POP learning algorithms: reducing work in identifying fuzzy rules. 1431-1445
- Sergio Bermejo, Joan Cabestany: Oriented principal component analysis for large margin classifiers. 1447-1461
- Yuming Chen: A remark on 'On stability of nonlinear continuous-time neural networks with delays'. 1463-
- Michael Schmitt: On using the Poincaré polynomial for calculating the VC dimension of neural networks. 1465-
- Martha A. Carter, Mark E. Oxley: Response: on using the Poincaré polynomial for calculating the V-C dimension of neural networks. 1467-