Context dependent learning in neural networks

L.J. Spreeuwers, B.J. Van Der Zwaag, F. Van Der Heijden

Research output: Chapter in Book/Report/Conference proceeding › Contribution to conference proceeding › Academic › peer-review

Abstract

In this paper, an extension to the standard error back-propagation learning rule for multi-layer feed-forward neural networks is proposed that enables them to be trained with context-dependent information. The context-dependent learning is realised by using a different error function (called Average Risk: AVR) instead of the sum of squared errors (SQE) normally used in error back-propagation, and by adapting the update rules accordingly. It is shown that for applications where this context-dependent information is important, a major improvement in performance is obtained.
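The abstract names the Average Risk (AVR) criterion but does not give its form, so the following is only a minimal, hypothetical Python/NumPy sketch of the general idea: a plain backpropagation loop in which the error function is a plug-in, so that the usual sum of squared errors (SQE) can be swapped for a context-dependent criterion. The context_weighted_error function below is an illustrative stand-in (per-sample weighting of squared errors), not the paper's AVR, and all names and parameters are assumptions.

    # Minimal sketch (not the authors' implementation): a one-hidden-layer MLP
    # trained by batch gradient descent, where the error function is a plug-in
    # so the usual sum of squared errors (SQE) can be replaced by a
    # context-weighted criterion. "context_weighted_error" is a hypothetical
    # illustration, not the paper's Average Risk (AVR).
    import numpy as np

    rng = np.random.default_rng(0)

    def sqe(y_pred, y_true, context=None):
        """Sum of squared errors and its gradient w.r.t. y_pred."""
        diff = y_pred - y_true
        return 0.5 * np.sum(diff ** 2), diff

    def context_weighted_error(y_pred, y_true, context):
        """Hypothetical context-dependent criterion: squared errors weighted
        per sample by a context factor (a stand-in for a risk-like criterion)."""
        diff = y_pred - y_true
        w = context[:, None]                      # shape (n_samples, 1)
        return 0.5 * np.sum(w * diff ** 2), w * diff

    def train_mlp(x, y, context, error_fn, hidden=8, lr=0.1, epochs=2000):
        """Plain batch backpropagation for a 1-hidden-layer MLP with a
        plug-in error function."""
        n_in, n_out = x.shape[1], y.shape[1]
        W1 = rng.normal(0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
        W2 = rng.normal(0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
        for _ in range(epochs):
            h = np.tanh(x @ W1 + b1)              # forward pass
            y_pred = h @ W2 + b2                  # linear output layer
            loss, dE_dy = error_fn(y_pred, y, context)
            dW2 = h.T @ dE_dy                     # backward pass
            db2 = dE_dy.sum(axis=0)
            dh = (dE_dy @ W2.T) * (1 - h ** 2)    # tanh derivative
            dW1 = x.T @ dh
            db1 = dh.sum(axis=0)
            for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
                p -= lr * g / len(x)              # in-place parameter update
        return loss

    # Toy usage: XOR-like data with a per-sample "context" weight.
    x = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    y = np.array([[0.], [1.], [1.], [0.]])
    context = np.array([1.0, 2.0, 2.0, 1.0])      # emphasise some samples
    print("SQE loss:", train_mlp(x, y, context, sqe))
    print("context-weighted loss:", train_mlp(x, y, context, context_weighted_error))

The only design point the sketch tries to convey is that error backpropagation itself is unchanged; replacing the error criterion only changes the gradient dE_dy fed into the same update rules.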
Original language: English
Title of host publication: Fifth International Conference on Image Processing and its Applications, 1995
Publisher: IET
Pages: 632-636
Number of pages: 5
ISBN (Print): 0-85296-642-3
DOIs
Publication status: Published - 6 Jul 1995
Externally published: Yes
Event: Fifth International Conference on Image Processing and its Applications, 1995, Edinburgh
Duration: 4 Jul 1995 - 6 Jul 1995

Conference

Conference: Fifth International Conference on Image Processing and its Applications, 1995
Period: 4/07/95 - 6/07/95

Keywords

  • backpropagation
  • feedforward neural networks
  • multilayer perceptrons
