Derivative Free Training of Recurrent Neural Networks - A Comparison of Algorithms and Architectures
Branimir Todorović, Miomir Stanković, Claudio Moraga
2014
Abstract
The problem of recurrent neural network training is considered here as an approximate joint Bayesian estimation of the neuron outputs and unknown synaptic weights. We have implemented recursive estimators using a nonlinear derivative-free approximation of the neural network dynamics. The computational efficiency and performance of the proposed algorithms as training algorithms for different recurrent neural network architectures are compared on the problem of long-term, chaotic time series prediction.
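The abstract describes training as joint Bayesian estimation of neuron outputs and weights with a derivative-free recursive estimator. The sketch below is an illustrative assumption, not the authors' code: it uses an unscented Kalman filter (a common derivative-free estimator) on a tiny one-neuron recurrent network, where the augmented state `[h, w_rec, w_in, w_out]` collects the hidden output and the weights, so no gradients of the network are ever computed.

```python
import numpy as np

def sigma_points(mean, cov, kappa=1.0):
    """Generate 2n+1 symmetric sigma points and their weights."""
    n = mean.size
    # Small jitter keeps the covariance positive definite for the Cholesky factor.
    S = np.linalg.cholesky((n + kappa) * (cov + 1e-9 * np.eye(n)))
    pts = [mean] + [mean + S[:, i] for i in range(n)] + [mean - S[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)
    return np.array(pts), w

def rnn_transition(x, u):
    """Propagate the augmented state: weights follow a random walk,
    the hidden output is recomputed by the network itself."""
    h, w_rec, w_in, w_out = x
    h_new = np.tanh(w_rec * h + w_in * u)
    return np.array([h_new, w_rec, w_in, w_out])

def ukf_step(mean, cov, u, y, q=1e-4, r=1e-2):
    """One predict/update cycle on a scalar observation y = w_out * h + noise."""
    pts, w = sigma_points(mean, cov)
    # Predict: push every sigma point through the network dynamics.
    prop = np.array([rnn_transition(p, u) for p in pts])
    m_pred = w @ prop
    P_pred = (prop - m_pred).T @ np.diag(w) @ (prop - m_pred) + q * np.eye(mean.size)
    # Update: the observation model is the network's read-out, w_out * h.
    z = prop[:, 3] * prop[:, 0]
    z_mean = w @ z
    P_zz = w @ (z - z_mean) ** 2 + r
    P_xz = (prop - m_pred).T @ (w * (z - z_mean))
    K = P_xz / P_zz  # Kalman gain (vector, since the observation is scalar)
    mean_new = m_pred + K * (y - z_mean)
    cov_new = P_pred - np.outer(K, K) * P_zz
    cov_new = 0.5 * (cov_new + cov_new.T)  # enforce symmetry numerically
    return mean_new, cov_new

# Hypothetical usage: filter a short sine sequence, predicting the next sample.
seq = np.sin(0.5 * np.arange(40))
mean = np.array([0.0, 0.3, 0.5, 0.5])  # [h, w_rec, w_in, w_out], arbitrary init
cov = 0.1 * np.eye(4)
for t in range(len(seq) - 1):
    mean, cov = ukf_step(mean, cov, seq[t], seq[t + 1])
```

The network size, observation model, and noise levels here are placeholders; the paper compares several architectures and estimators, none of which are reproduced by this toy.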
Paper Citation
in Harvard Style
Todorović B., Stanković M. and Moraga C. (2014). Derivative Free Training of Recurrent Neural Networks - A Comparison of Algorithms and Architectures. In Proceedings of the International Conference on Neural Computation Theory and Applications - Volume 1: NCTA, (IJCCI 2014), ISBN 978-989-758-054-3, pages 76-84. DOI: 10.5220/0005081900760084
in Bibtex Style
@conference{ncta14,
author={Branimir Todorović and Miomir Stanković and Claudio Moraga},
title={Derivative Free Training of Recurrent Neural Networks - A Comparison of Algorithms and Architectures},
booktitle={Proceedings of the International Conference on Neural Computation Theory and Applications - Volume 1: NCTA, (IJCCI 2014)},
year={2014},
pages={76-84},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0005081900760084},
isbn={978-989-758-054-3},
}
in EndNote Style
TY - CONF
JO - Proceedings of the International Conference on Neural Computation Theory and Applications - Volume 1: NCTA, (IJCCI 2014)
TI - Derivative Free Training of Recurrent Neural Networks - A Comparison of Algorithms and Architectures
SN - 978-989-758-054-3
AU - Todorović B.
AU - Stanković M.
AU - Moraga C.
PY - 2014
SP - 76
EP - 84
DO - 10.5220/0005081900760084
ER -