Speaker Series
Attend the NeurReps Global Speaker Series
In the lead-up to our upcoming workshop at NeurIPS 2024, we're excited to announce the NeurReps Global Speaker Series!
Designed to foster a truly global community, the NeurReps Speaker Series is a rotating, biweekly, hybrid seminar. By hosting and live-streaming talks from various international institutions, we aim to increase geographic diversity and broaden our worldwide network of researchers. Talks will be made publicly available on our website and on various streaming services, ensuring widespread access.
Join us as we build momentum towards the 3rd edition of the NeurReps Workshop at NeurIPS 2024, exploring the fascinating convergence of mathematical structures in neural systems and advancing our understanding of information processing in brains and machines.
Upcoming Seminars
November 21, 2024
10 AM EST / 4 PM CET
L3.36, Science Park (@University of Amsterdam)
Geometry-Grounded Representation Learning
ABSTRACT
Despite their unprecedented scale, contemporary AI systems exhibit fundamental limitations in geometric reasoning and physical understanding. Further scaling up models will not fix these issues. To enable geometric reasoning, we must introduce new ideas for learning reliable representations by explicitly preserving geometric structure. In this talk, we will present an overview of our recent work toward geometrically grounded models, including a generalized notion of weight-sharing as sharing operations over equivalence classes of point-pairs, a designable latent space through isometry learning and pull-back geometry, and finally our work on equivariant neural fields. The neural field's equivariance relation ensures that the latent variables are geometrically meaningful, which we show through various classification and segmentation experiments. We then show that this allows for continuous PDE forecasting entirely in latent space: instead of modeling a PDE on the fields, we model equivariant dynamics through equivariant neural ODEs applied to the geometric latent variables.
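To give a rough sense of the latent-space forecasting idea mentioned in the abstract, here is a minimal, self-contained PyTorch sketch. It is purely illustrative and not the speakers' implementation: the module names (LatentEncoder, LatentDynamics, FieldDecoder) are hypothetical stand-ins, equivariance constraints are omitted for brevity, and the latent ODE is integrated with a plain explicit Euler step.

```python
# Illustrative sketch only: encode a sampled field into latents, evolve the
# latents with a simple neural ODE, and decode with a conditional neural field.
# Equivariance constraints from the talk are NOT implemented here.
import torch
import torch.nn as nn

class LatentEncoder(nn.Module):
    """Map a field u(x) sampled on a fixed grid to a small latent vector z."""
    def __init__(self, n_points, latent_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_points, 128), nn.ReLU(), nn.Linear(128, latent_dim))
    def forward(self, u):
        return self.net(u)

class LatentDynamics(nn.Module):
    """Vector field f(z) defining dz/dt = f(z), integrated with explicit Euler below."""
    def __init__(self, latent_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 128), nn.Tanh(), nn.Linear(128, latent_dim))
    def forward(self, z):
        return self.net(z)

class FieldDecoder(nn.Module):
    """Conditional neural field: predict u(x) from query coordinates x and latent z."""
    def __init__(self, latent_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim + 1, 128), nn.ReLU(), nn.Linear(128, 1))
    def forward(self, x, z):
        # x: (batch, n_query, 1), z: (batch, latent_dim)
        z_exp = z.unsqueeze(1).expand(-1, x.shape[1], -1)
        return self.net(torch.cat([x, z_exp], dim=-1)).squeeze(-1)

def forecast(u0, x_query, encoder, dynamics, decoder, t_end=1.0, n_steps=20):
    """Encode the initial field, integrate the latent ODE forward, decode at query points."""
    z = encoder(u0)
    dt = t_end / n_steps
    for _ in range(n_steps):  # explicit Euler integration of dz/dt = f(z)
        z = z + dt * dynamics(z)
    return decoder(x_query, z)

if __name__ == "__main__":
    batch, n_points, latent_dim = 4, 64, 16
    u0 = torch.randn(batch, n_points)  # initial field samples (toy data)
    x_query = torch.linspace(0, 1, n_points).view(1, -1, 1).expand(batch, -1, -1)
    enc = LatentEncoder(n_points, latent_dim)
    dyn = LatentDynamics(latent_dim)
    dec = FieldDecoder(latent_dim)
    u_pred = forecast(u0, x_query, enc, dyn, dec)
    print(u_pred.shape)  # torch.Size([4, 64])
```

The key point the sketch tries to convey is that the forecasting loop runs entirely on the low-dimensional latents, and the field is only decoded at the end, at arbitrary query coordinates.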
Erik Bekkers - University of Amsterdam
Erik Bekkers is an assistant professor in Geometric Deep Learning at the Amsterdam Machine Learning Lab (AMLab), University of Amsterdam. His research focuses on making AI systems intrinsically capable of geometric reasoning through geometry-grounded representation learning. Before his current position, he worked as a postdoctoral researcher in applied differential geometry at the Eindhoven University of Technology (TU/e), where he also completed his PhD cum laude in Biomedical Engineering. His research has been recognized through several prestigious awards, including two personal grants from the Dutch Research Council (NWO): a Veni grant (2019) for developing context-aware AI and a Vidi grant (2024) for his work on geometry-grounded learning. He received the MICCAI Young Scientist Award (2018) and the Philips Impact Award (2018), and was named an ELLIS Scholar (2023) within the Geometric Deep Learning program. Erik is known for his contributions to equivariant deep learning and its applications in AI for Science and Medicine; his commitment to education and open science is reflected in his widely adopted educational content on equivariant deep learning.
David Knigge - University of Amsterdam
David Knigge is a third-year Ph.D. candidate at the Video Image Sense Lab at the University of Amsterdam, supervised by Dr. Efstratios Gavves. His research focuses on unifying computer vision methods by modelling data as continuous functions, and on efficient deep learning through geometric principles. David has contributed to leading conferences including ICML, ICLR, CVPR, NeurIPS, and MIDL, and he is especially proud of having been selected as an outstanding reviewer for both NeurIPS and ICML.
Acknowledgements
This talk was made possible thanks to Chase van de Geijn (University of Amsterdam), who volunteered to locally host and record the seminar.