TECHNICAL REPORT ON RESEARCH ACTIVITIES IN LAFORIA-IBP, University Paris VI, for the period 15 February - 15 August 1994. The technical report also contains three articles:

S. Raudys

IBP-Laforia 1995/17: Laforia research report
52 pages - June 1995 - Document in English.

PostScript: 8 KB

Title: TECHNICAL REPORT ON RESEARCH ACTIVITIES IN LAFORIA-IBP, University Paris VI, for the period 15 February - 15 August 1994. The technical report also contains three articles:



Abstract:

Unexpected small sample properties of the linear perceptrons
S. Raudys
(16 pp., in English)
Abstract: An analytical expression for the generalization error of a zero empirical error classifier is derived. It is shown that the small-training-set behavior of the linear perceptron differs essentially from that of conventional parametric classification rules: for parametric linear and quadratic discriminant functions the dimensionality/sample-size ratios are p/N and p²/N, while for the zero empirical error classifier we have the significantly better ratio p/N². The asymptotic formula for the generalization error explains the influence of the margin Δ on the generalization error. The equation obtained shows that the training process first decreases the generalization error, but can later increase it. The linear perceptron trained by the proposed "targets 0.4 & 0.0001" strategy can form complex nonlinear decision boundaries and can be useful in solving nonlinear classification problems.
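To make the two-stage "targets 0.4 & 0.0001" training concrete, here is a minimal Python/NumPy sketch of a sigmoid perceptron minimizing squared loss, first with soft targets near 0.5 and then with near-extreme ones. It is an illustration under one reading of the abstract, not code from the report; the toy data, the learning rate, and the exact target schedule (0.4/0.6 followed by 0.0001/0.9999) are assumptions.

import numpy as np

def train_perceptron(X, y, targets, w=None, lr=0.1, epochs=500):
    # Gradient descent on the squared loss of a sigmoid perceptron.
    # X: (N, p) inputs; y: (N,) labels in {0, 1};
    # targets: (t_class0, t_class1) output targets for the two classes.
    N, p = X.shape
    Xb = np.hstack([X, np.ones((N, 1))])          # append bias input
    if w is None:
        w = np.zeros(p + 1)
    t = np.where(y == 1, targets[1], targets[0])  # per-pattern target
    for _ in range(epochs):
        o = 1.0 / (1.0 + np.exp(-Xb @ w))         # sigmoid outputs
        grad = Xb.T @ ((o - t) * o * (1 - o)) / N # d(squared loss)/dw
        w -= lr * grad
    return w

# Toy two-class Gaussian data (purely for demonstration).
rng = np.random.default_rng(0)
p = 5
X = np.vstack([rng.normal(-1, 1, (50, p)), rng.normal(+1, 1, (50, p))])
y = np.repeat([0, 1], 50)

# Stage 1: soft targets keep the solution close to a Fisher-like classifier.
w = train_perceptron(X, y, targets=(0.4, 0.6))
# Stage 2: near-extreme targets push toward a minimum empirical error
# classifier (the "targets 0.4 & 0.0001" schedule, as we read the abstract).
w = train_perceptron(X, y, targets=(0.0001, 0.9999), w=w, epochs=2000)

Xb = np.hstack([X, np.ones((100, 1))])
acc = np.mean((1.0 / (1.0 + np.exp(-Xb @ w)) > 0.5) == y)
print(f"training accuracy: {acc:.2f}")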

Optimal regularization of linear and nonlinear perceptrons
S. Raudys, M. Skurichina, T. Cibas, P. Gallinari
(10 pp., in English)
Abstract: We derive an analytical formula for the generalization error of linear adaptive classifiers trained with weight decay. Analytical and experimental results are then presented to analyze the optimal value of the regularization parameters as a function of the training set size.
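For readers who want to experiment with this setting, the sketch below fits a least-squares linear classifier with a weight-decay (ridge) penalty and scans the decay constant over several training set sizes. It is a self-contained toy on synthetic Gaussian data, not the paper's analytical result; every constant in it is made up, and for simplicity the bias term is penalized along with the weights.

import numpy as np

def ridge_classifier(X, y, lam):
    # Least-squares linear discriminant with weight decay
    # (ridge penalty lam * ||w||^2); labels y in {-1, +1}.
    N, p = X.shape
    Xb = np.hstack([X, np.ones((N, 1))])
    A = Xb.T @ Xb / N + lam * np.eye(p + 1)
    return np.linalg.solve(A, Xb.T @ y / N)

def error_rate(w, X, y):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.mean(np.sign(Xb @ w) != y)

rng = np.random.default_rng(1)
p = 20

def sample(n):  # n patterns per class, two shifted Gaussians
    X = np.vstack([rng.normal(-0.3, 1, (n, p)), rng.normal(0.3, 1, (n, p))])
    y = np.repeat([-1, 1], n)
    return X, y

Xte, yte = sample(5000)                      # large test set
for n in (15, 50, 200):                      # patterns per class
    Xtr, ytr = sample(n)
    lams = 10.0 ** np.arange(-4, 2)
    errs = [error_rate(ridge_classifier(Xtr, ytr, lam), Xte, yte)
            for lam in lams]
    best = lams[int(np.argmin(errs))]
    print(f"N={2*n:4d}  best lambda ~ {best:g}  test error {min(errs):.3f}")

On runs like this the best decay constant tends to shrink as the training set grows, which is the qualitative dependence the article analyzes.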

Generalization errors of Adaptive Linear Classifiers
S. Raudys
(26 pp., in English)
Abstract: The cost function of a perceptron that minimizes a squared loss with a nonlinearity depends upon the target values: in one extreme case it is close to ADALINE and the standard Fisher linear classifier, and in the other to a minimum empirical error classifier. An analytical equation for the generalization error of the "zero empirical error classifier" is derived for the case where the distributions of the pattern classes share a common covariance matrix. It is shown that for small training sets the behavior of this classifier differs markedly from that of conventional parametric classifiers: for the Fisher classifier the dimensionality/sample-size ratio is p/N, while for the zero empirical error classifier, in the spherical pattern distribution case, we have the significantly better ratio p/N². A successful initialization of the linear perceptron contains a large amount of information and can dramatically reduce the generalization error. The asymptotic formula obtained explains the influence of the targets on the generalization error: the training process first decreases the generalization error, but can later increase it. A new training strategy with varying targets, called the "targets 0.4 and 0.0001" technique, is proposed to improve generalization.
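The p/N versus p/N² contrast can be read as a back-of-envelope learning-curve comparison. The snippet below simply tabulates the two ratios for a few (p, N) pairs, under the simplifying assumption that the excess generalization error is proportional to each ratio with all constants ignored; the actual formulas in the article are more detailed.

# Assumption: excess error ~ k*p/N (Fisher) vs. k*p/N**2 (zero empirical
# error classifier), constants ignored; values below are illustrative only.
for p in (10, 50, 200):          # dimensionality
    for N in (2 * p, 10 * p):    # training set size
        print(f"p={p:3d} N={N:4d}  p/N={p / N:.3f}  p/N^2={p / N**2:.5f}")

At a fixed dimensionality, the p/N² term falls off much faster with N, which is why the zero empirical error classifier is said to need far smaller training sets.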

