Fahad Khalid's thesis addresses an important machine learning problem, namely the optimization of supervised learning algorithms for the task at hand. More specifically, the thesis investigates whether the inductive bias of back-propagation-trained artificial neural networks can be modified by replacing the internal error metric, so that an arbitrary performance metric can be optimized (measure-based learning).
Khalid carries out an extensive analysis of related work and investigates this difficult problem theoretically. The main contribution of the thesis is a detailed account of the issues that must be considered when modifying the inductive bias of neural networks. The main conclusion is that neither the standard error back-propagation mechanism nor the gradient descent method is suitable for measure-based learning. Hence, a network may still be used for representation, but an alternative training/learning mechanism has to be applied.

Niklas Lavesson, Blekinge Institute of Technology
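A minimal sketch of the intuition behind that conclusion, with illustrative names not taken from the thesis: a piecewise-constant metric such as 0/1 accuracy is flat almost everywhere, so its gradient carries no signal for gradient descent, whereas a smooth error metric such as squared error does.

```python
# Hedged illustration (not the thesis's method): comparing the gradient of a
# smooth error metric with that of a piecewise-constant 0/1 metric for a
# one-parameter linear "network" w * x.

def squared_error(w, x, y):
    return (w * x - y) ** 2

def zero_one_loss(w, x, y):
    # 0 if the thresholded prediction matches the thresholded target, else 1
    return float((w * x >= 0.5) != (y >= 0.5))

def numeric_grad(f, w, eps=1e-6):
    # Central-difference estimate of df/dw
    return (f(w + eps) - f(w - eps)) / (2 * eps)

x, y, w = 1.0, 1.0, 0.2
g_mse = numeric_grad(lambda v: squared_error(v, x, y), w)   # nonzero: descent can improve w
g_acc = numeric_grad(lambda v: zero_one_loss(v, x, y), w)   # zero: the metric is locally flat
```

Here `g_mse` is about -1.6 (pointing toward the target), while `g_acc` is exactly 0: simply substituting such a metric into back-propagation gives the weight update no direction to follow, which is why an alternative learning mechanism is needed.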
Copyright © 2011 SAIS