Machine learning algorithms for identifying dependency networks are being applied in biology to learn protein correlations and in neuroscience to learn brain pathways associated with development, adaptation, and disease. Yet there is rarely sufficient data to infer a robust individual network for each stage of development or for each disease/control population. These multiple networks must therefore be considered simultaneously, which dramatically expands the space of solutions for the learning problem. Standard machine learning objectives find parsimonious solutions that best fit the data; with limited data, however, many solutions are nearly score-equivalent. Effectively exploring these complex solution spaces requires input from the domain scientist to refine the objective function.
In this talk, I present transfer learning algorithms for both Bayesian networks and the graphical lasso that reduce the variance of solutions. By incorporating human input into the transfer-bias objective, the topology of the solution space is shaped to help answer knowledge-based queries about the confidence of dependency relationships associated with each population. I also describe an interactive, human-in-the-loop approach in which a human reacts to machine-learned solutions and gives feedback that adjusts the objective function; the result is a solution to an objective function defined jointly by the machine and the human. Case studies are presented in two areas: functional brain networks associated with learning stages and with mental illness, and plasma protein concentration dependencies associated with cancer.
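For readers unfamiliar with transfer-biased network estimation, one representative formulation (a sketch for intuition, not necessarily the objective used in this work) is a fused variant of the graphical lasso, in which a coupling penalty shrinks population-specific precision matrices toward one another. Here $S_k$ denotes the sample covariance of population $k$, $\Theta_k$ its estimated precision matrix, and $\lambda_1$, $\lambda_2$ weight sparsity and transfer, respectively; all of this notation is illustrative rather than taken from the abstract:

$$
\min_{\Theta_1,\dots,\Theta_K \,\succ\, 0}\;
\sum_{k=1}^{K}\Big[\operatorname{tr}(S_k \Theta_k) - \log\det \Theta_k\Big]
\;+\; \lambda_1 \sum_{k=1}^{K} \lVert \Theta_k \rVert_{1,\mathrm{off}}
\;+\; \lambda_2 \sum_{k < k'} \lVert \Theta_k - \Theta_{k'} \rVert_{1}
$$

Increasing $\lambda_2$ pulls the per-population networks toward a shared topology, reducing the variance of individual solutions; in this framing, human input can be understood as adjusting such transfer weights, or restricting which dependency relationships the coupling penalty is allowed to tie together.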