Event type: Seminar
Anita Layton
(University of Waterloo)
His and Her Mathematical Models of Physiological Systems
Imagine someone having a heart attack. Do you visualize the dramatic Hollywood portrayal of a heart attack, in which a man collapses, grabbing his chest in agony? Even though heart disease is the leading killer of women worldwide, the misconception that heart disease is a men’s disease has persisted. This dangerous misconception risks women ignoring their own symptoms. Gender biases and false impressions are by no means limited to heart attack symptoms. Such prejudices exist throughout our healthcare system, from scientific research to disease diagnosis and treatment strategies. A goal of our research program is to address this gender inequity by identifying and disseminating insights into sex differences in health and disease, using computational modeling tools.
Marte Julie Sætra
(Simula Research Laboratory)
Computational modeling of ion concentration dynamics in brain tissue
Over the past decades, computational neuroscientists have developed ever more sophisticated and morphologically complex neuron models. Most of these models assume that the intra- and extracellular ion concentrations remain constant over the simulated period and thus do not account for concentration-dependent effects on neuronal firing properties. Of the models that do incorporate ion concentration dynamics, few account for the electrodiffusive nature of intra- and extracellular ion transport. In this talk, I will present the first multicompartmental neuron model that accounts for ion concentration dynamics in a biophysically consistent manner [1]. I will also show how electrodiffusive modeling of neurons and glial cells can be used to explore the genesis of slow potentials in the brain [2].
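To make the term "electrodiffusive" concrete, here is a minimal, illustrative sketch (not the model of [1] or [2]): a Nernst-Planck flux of one ion species between two compartments, combining Fickian diffusion down a concentration gradient with drift driven by the potential gradient. All parameter values below are hypothetical placeholders.

# Electrodiffusive (Nernst-Planck) flux of one ion species between two
# compartments: J = -D*dc/dx - (D*z*F/(R*T)) * c * dV/dx
F = 96485.33   # Faraday constant, C/mol
R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # temperature, K (about 37 C)

def nernst_planck_flux(c1, c2, v1, v2, D, z, dx):
    """Flux (mol m^-2 s^-1) from compartment 1 toward compartment 2.
    c1, c2: concentrations (mol/m^3); v1, v2: potentials (V);
    D: diffusion coefficient (m^2/s); z: valence; dx: separation (m)."""
    c_mean = 0.5 * (c1 + c2)                 # concentration entering the drift term
    diffusion = -D * (c2 - c1) / dx          # Fickian diffusion term
    drift = -D * z * F / (R * T) * c_mean * (v2 - v1) / dx  # electrical drift term
    return diffusion + drift

# Example: K+ (z = +1), with higher concentration and more negative potential inside
print(nernst_planck_flux(c1=140.0, c2=4.0, v1=-0.070, v2=0.0,
                         D=1.96e-9, z=1, dx=1e-6))

Coupling such fluxes to conservation of ions and charge in every compartment is what keeps the intra- and extracellular concentrations, not just the membrane potential, part of the simulated state.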
Cengiz Pehlevan
(Harvard University)
Inductive Bias of Neural Networks
Predicting a previously unseen example from training examples is unsolvable without additional assumptions about the nature of the task at hand. A learner’s performance depends crucially on how its internal assumptions, or inductive biases, align with the task. I will present a theory that describes the inductive biases of neural networks using kernel methods and statistical mechanics. This theory elucidates an inductive bias to explain data with “simple” functions, which are identified by solving a related kernel eigenfunction problem on the data distribution. This notion of simplicity allows us to characterize whether a network is compatible with a learning task, facilitating good generalization performance from a small number of training examples. I will present applications of this theory to artificial and biological neural systems, and real datasets.
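As a rough illustration of the kernel picture (an assumed RBF kernel stands in for a network's kernel, for example its neural tangent kernel; this sketch is not the speaker's method), the "simple" functions correspond to the leading eigenvectors of the kernel matrix evaluated on samples from the data distribution.

import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1.0, 1.0, size=200))   # samples from the data distribution

# Kernel matrix K_ij = k(x_i, x_j); the RBF form and lengthscale are placeholders
lengthscale = 0.3
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * lengthscale ** 2))

# Eigenvectors of K/n approximate the kernel eigenfunctions on the data
# distribution; large-eigenvalue modes are the "simple" functions the kernel
# is biased toward learning first.
eigvals, eigvecs = np.linalg.eigh(K / len(x))
order = np.argsort(eigvals)[::-1]
print("leading eigenvalues:", np.round(eigvals[order[:5]], 4))

# A target built mostly from the leading modes is "compatible" with this kernel
# and should generalize from few training examples; one built from the trailing
# (high-frequency) modes should not.
simple_target = eigvecs[:, order[0]] + 0.5 * eigvecs[:, order[2]]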