Center for Nonlinear Studies
Wednesday, March 12, 2014
3:00 PM - 4:00 PM
CNLS Conference Room (TA-3, Bldg 1690)

Seminar

Optimal and Predictive Classification and Error Estimation

Lori A. Dalton
Ohio State University

Informatics-based discovery can be greatly accelerated by integrating existing theory with big data, particularly when sample points are expensive or difficult to acquire. This is because the predictive capacity of a classifier is quantified by its error rate, and a purely data-driven approach requires a large number of examples to guarantee accurate error estimation. Recognizing the need for a general model-based framework that integrates scientific knowledge of the mechanisms governing a system's behavior with observable data, in this talk I will present a Bayesian approach to optimal, predictive classification and classifier error estimation.

The basis of the method is to construct a prior distribution over an uncertainty class of probabilistic models, effectively constraining the relationship between observations and the decision to be made, with higher weight placed on the models most consistent with available scientific knowledge. Using Bayesian estimation principles, this prior is combined with observed data to produce a posterior distribution. We then formulate classification and error estimation as optimization problems within this Bayesian framework, leading to (1) optimal classifiers, (2) optimal minimum-mean-square-error (MMSE) estimators of classifier error, and (3) a sample-conditioned mean-square error (MSE) quantifying the accuracy of error estimation. In essence, this work puts forth a rigorous methodology for finding these optimized tools, all of which take into account both the theoretical and empirical knowledge available.
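To make the pipeline above concrete, the following is a minimal sketch of the idea in a deliberately simple setting: a single binary feature, two equally likely classes, and an uncertainty class parameterized by the per-class probabilities p_y = P(X=1 | class y), each given a Beta prior. This toy model, the function names, and the specific prior parameters are illustrative assumptions, not the speaker's exact formulation; the talk's framework applies to much richer model classes.

```python
# Illustrative sketch (not the speaker's exact formulation):
# binary feature X in {0, 1}, two classes with equal priors,
# p_y = P(X=1 | class y) unknown, with a Beta(a, b) prior on each.

def posterior_predictive(a, b, n1, n):
    # Posterior mean E[p | data] for a Beta(a, b) prior after
    # observing n1 ones in n Bernoulli samples from this class.
    return (a + n1) / (a + b + n)

def optimal_bayesian_classifier(q0, q1):
    # q_y = posterior predictive P(X=1 | class y, data).
    # Returns psi: x -> label, choosing the class whose posterior
    # predictive density at x is larger (equal class priors).
    def psi(x):
        l0 = q0 if x == 1 else 1 - q0
        l1 = q1 if x == 1 else 1 - q1
        return 1 if l1 > l0 else 0
    return psi

def mmse_error_estimate(psi, q0, q1):
    # Bayesian MMSE error estimate: the posterior-expected
    # misclassification probability of psi (equal class priors).
    e0 = sum((q0 if x == 1 else 1 - q0) for x in (0, 1) if psi(x) == 1)
    e1 = sum((q1 if x == 1 else 1 - q1) for x in (0, 1) if psi(x) == 0)
    return 0.5 * (e0 + e1)

# Example with uniform Beta(1, 1) priors: class 0 shows X=1 in
# 2 of 10 samples, class 1 shows X=1 in 8 of 10 samples.
q0 = posterior_predictive(1, 1, 2, 10)   # 0.25
q1 = posterior_predictive(1, 1, 8, 10)   # 0.75
psi = optimal_bayesian_classifier(q0, q1)
err = mmse_error_estimate(psi, q0, q1)   # 0.25
```

The key structural point the sketch captures is that both the classifier and its error estimate are computed from the same posterior: the error estimate is an expectation over the uncertainty class rather than a resubstitution or cross-validation count on the data, which is what allows it to remain accurate in small-sample regimes.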

Host: Turab Lookman, txl@lanl.gov