The Modeling Competition is a two-week workshop that provides researchers from a variety of fields and backgrounds with the toolkit needed to use computational models for hypothesis testing and for quantitative fitting of behavioral data and brain-behavior relationships. Topics covered include model validation, model selection, posterior predictive checks, maximum likelihood, hierarchical estimation and neural regressors.
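To give a flavor of the maximum-likelihood topic above, here is a minimal sketch (a hypothetical illustration, not taken from the workshop materials): choices are simulated from a logistic choice model with a known inverse temperature, and the parameter is then recovered by minimizing the negative log-likelihood with a simple grid search.

```python
import math
import random

random.seed(0)

def p_choose_a(value_diff, beta):
    """Probability of choosing option A under a logistic choice rule."""
    return 1.0 / (1.0 + math.exp(-beta * value_diff))

# Simulate 500 choices from a known "true" inverse temperature.
true_beta = 2.0
value_diffs = [random.uniform(-1, 1) for _ in range(500)]
choices = [random.random() < p_choose_a(vd, true_beta) for vd in value_diffs]

def neg_log_likelihood(beta):
    """Negative log-likelihood of the observed choices under beta."""
    nll = 0.0
    for vd, chose_a in zip(value_diffs, choices):
        p = p_choose_a(vd, beta)
        nll -= math.log(p if chose_a else 1.0 - p)
    return nll

# Grid search over candidate betas; the minimizer of the negative
# log-likelihood is the maximum-likelihood estimate.
grid = [b / 10 for b in range(1, 101)]  # 0.1 .. 10.0
best_beta = min(grid, key=neg_log_likelihood)
print(f"true beta = {true_beta}, ML estimate = {best_beta}")
```

In practice the workshop tutorials would use richer models and numerical optimizers rather than a grid, but the logic is the same: write down the likelihood of the data under the model, then search for the parameters that maximize it.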
The Modeling Competition workshops include daily lectures and discussion sessions, as well as hands-on tutorials that provide students with code templates and a chance to practice writing, fitting and testing computational models.
During the second week, participants are provided with a novel dataset that spans multiple aspects of cognition and perception. They then take part in a collaborative modeling challenge in which they construct and test their own models. Each entrant's models are evaluated on held-out data, and prizes are awarded for predictive power, rigor, creativity and innovation.
The Modeling Competition workshops are useful both for computational novices and for those with a more advanced computational background who have not yet mastered model selection and parameter estimation, or who would like to learn more about specific classes of models.
For more information, email Michael Frank, co-director of the Center for Computational Brain Science.