Event category: Spring 2021
(Institut des Hautes Études Scientifiques [IHES])
Cancer cells and their epithelial neighbors
I am a cancer researcher working on a framework for a new mathematical model of metastasis. The motivation for a new model is that, in mice, metastases can grow from cells that are still normal at the time they arrive at the future metastasis site. If the same happens in humans, it would have significant implications for cancer therapy. First, it would explain why anti-cancer therapies fail to prevent metastases in some patients: if the cells are still non-malignant at the time of therapy, they would be spared by the treatment. Second, it may be possible to identify improved treatments based on their ability to kill these non-malignant cells. I hypothesize that dissemination of non-malignant epithelial cells occurs in parallel with tumor cells, and that their subsequent transformation at ectopic sites is a source of some metastases. This scenario ties together two seemingly contradictory observations: metastases are associated with large and rapidly growing primary tumors, yet metastatic tumors themselves often take a long time to appear. During my recent sabbatical at the Institut des Hautes Études Scientifiques, together with Misha Gromov and his colleagues, I started to interrogate publicly available data on mutation rates and profiles from cancer patients and healthy individuals, to determine whether the earliest common ancestor predicted by phylogenetic methods in some primary-metastasis pairs has features of a non-malignant cell.
Apr 20, 2021 Online Seminar
Inductive Bias of Neural Networks
Predicting a previously unseen example from training examples is unsolvable without additional assumptions about the nature of the task at hand. A learner's performance therefore depends crucially on how its internal assumptions, or inductive biases, align with the task. I will present a theory that describes the inductive biases of neural networks using kernel methods and statistical mechanics. This theory elucidates an inductive bias to explain data with "simple" functions, which are identified by solving a related kernel eigenfunction problem on the data distribution. This notion of simplicity allows us to characterize whether a network is compatible with a learning task, facilitating good generalization from a small number of training examples. I will present applications of this theory to artificial and biological neural systems, and to real datasets.
Apr 27, 2021 Online Seminar
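The kernel eigenfunction idea in the second abstract can be illustrated numerically. The sketch below is not the speaker's method, only a generic example: it uses an RBF kernel as a stand-in for a network's induced kernel (an assumption) and approximates the kernel's eigenfunctions on sampled data via eigendecomposition of the Gram matrix (the Nyström approach). Modes with large eigenvalues correspond to the "simple" functions such a learner prefers.

```python
# Hypothetical sketch: approximating kernel eigenfunctions on sampled data.
# The RBF kernel stands in for a network's induced kernel (an assumption).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))  # samples from the data distribution

def rbf_kernel(A, B, length_scale=0.5):
    # Gaussian (RBF) kernel matrix between row-sets A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * length_scale ** 2))

K = rbf_kernel(X, X)

# Eigenvectors of K/n approximate the kernel's eigenfunctions on the data;
# the eigenvalue ranks how "simple" (easily learnable) each mode is.
eigvals, eigvecs = np.linalg.eigh(K / len(X))
eigvals = eigvals[::-1]   # descending order
eigvecs = eigvecs[:, ::-1]

print(eigvals[:5])  # the spectrum decays quickly: a few top modes dominate
```

Functions well aligned with the leading eigenvectors are learned from few examples; functions concentrated on the tail modes are, in this sense, "complex" for the kernel.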