
Information Loss in Neural Classifiers from Sampling

Dr. Brandon Foggo, Post-doc, Smart City Innovation Laboratory, UCR
ABSTRACT –

An estimator can perform no better than the information it holds about the variable it is estimating, and that information is limited to what it has seen in its training samples. A finite sample cannot transfer the full information content of a random variable to an estimator; some information is inevitably lost. This presentation analyzes such losses for neural network classifiers. Analyzing these losses can lead to improved architecture designs and improved training-data selection strategies, and can provide explanations for empirical results in machine learning theory.
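
To make the idea concrete, here is a minimal illustrative sketch (not taken from the talk, and not the speaker's method): a classifier trained on finitely many samples captures less information about the label than the data-generating distribution contains. The gap between the model's test cross-entropy and the true conditional entropy H(Y|X) serves as a rough proxy for the lost information, and it shrinks as the sample size grows. All distributions, sample sizes, and the plain logistic-regression model below are assumptions chosen only for illustration.

```python
# Illustrative sketch: information lost by a classifier trained on finite samples,
# measured as excess test cross-entropy over the true conditional entropy H(Y|X).
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """Two equiprobable classes; X | Y=k is Normal(+1 or -1, variance 1)."""
    y = rng.integers(0, 2, size=n)
    x = rng.normal(loc=np.where(y == 1, 1.0, -1.0), scale=1.0)
    return x, y

def true_posterior(x):
    """Exact P(Y=1 | X=x) for the mixture above (equal priors, unit variance)."""
    return 1.0 / (1.0 + np.exp(-2.0 * x))

def cross_entropy(p, y):
    """Mean negative log-likelihood of labels y under predicted probabilities p."""
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

# True conditional entropy H(Y|X), estimated by Monte Carlo with the true posterior.
x_test, y_test = sample(200_000)
h_y_given_x = cross_entropy(true_posterior(x_test), y_test)

def fit_logistic(x, y, steps=2000, lr=0.1):
    """Plain gradient-descent logistic regression (one weight, one bias)."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w * x + b)))
        w -= lr * np.mean((p - y) * x)
        b -= lr * np.mean(p - y)
    return w, b

for n in [30, 300, 3000, 30000]:
    x_train, y_train = sample(n)
    w, b = fit_logistic(x_train, y_train)
    p_test = 1.0 / (1.0 + np.exp(-(w * x_test + b)))
    gap = cross_entropy(p_test, y_test) - h_y_given_x
    print(f"n = {n:6d}   excess cross-entropy (lost information, nats) ~ {gap:.4f}")
```

Under these assumptions the printed gap decreases toward zero as n grows, which is one simple way to see the finite-sample information loss the abstract refers to; the talk's analysis of neural network classifiers is, of course, more general than this toy model.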
