Speaker Series

Attend the NeurReps Global Speaker Series

In the lead-up to our workshop at NeurIPS 2024, we're excited to announce the NeurReps Global Speaker Series!


Designed to foster a truly global community, the NeurReps Speaker Series is a rotating, biweekly, hybrid seminar series. By hosting and live-streaming talks from various international institutions, we aim to increase geographic diversity and broaden our worldwide network of researchers. Talks will be made publicly available on our website and various streaming services, ensuring widespread access to these valuable insights.


Join us as we build momentum towards the 3rd edition of the NeurReps Workshop at NeurIPS 2024, exploring the fascinating convergence of mathematical structures in neural systems and advancing our understanding of information processing in brains and machines.


Upcoming Seminars

October 17, 2024 

11 am ET

IDEAS Initiative @UPenn


Equivariant Neural Inertial Odometry

ABSTRACT 

In this talk, we introduce a new class of problems related to integrating inertial measurements from an IMU, which play a significant role in navigation when combined with visual data. Despite tremendous technological advances in the precision of instrumentation, integrating acceleration and angular velocity still suffers from drift in the displacement estimates. Neural networks have come to the rescue in estimating displacement and the associated uncertainty covariance. However, such networks do not account for the physical roto-reflective symmetries inherent in IMU data, forcing them to memorize the same priors for every possible motion direction, which hinders generalization. In this work, we characterize these symmetries and show that the IMU data and the resulting displacement and covariance transform equivariantly when rotated around the gravity vector and reflected with respect to arbitrary planes parallel to gravity. We propose a network that predicts an equivariant gravity-aligned frame from equivariant vectors and invariant scalars derived from IMU data, leveraging expressive linear and non-linear layers tailored to commute with the underlying symmetry transformation. Such a canonical frame can precede existing architectures, whether end-to-end or filter-based. We will include an introduction to the inertial filtering problem and present results on real-world datasets.
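To make the symmetry in the abstract concrete, here is a minimal sketch, not taken from the talk: the naive double-integration model and all names are illustrative. It checks numerically that plain double integration of (gravity-compensated) accelerations commutes with rotations about the gravity axis, which is the equivariance property the abstract describes.

```python
import numpy as np

def rot_z(theta):
    # Rotation about the gravity (z) axis.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def integrate_displacement(acc, dt):
    # Naive double integration of body accelerations (illustrative stand-in
    # for an inertial odometry pipeline; real systems also handle bias/noise).
    vel = np.cumsum(acc * dt, axis=0)
    return np.sum(vel * dt, axis=0)

rng = np.random.default_rng(0)
acc = rng.normal(size=(100, 3))  # synthetic gravity-compensated accelerations
dt = 0.01
R = rot_z(0.7)

d = integrate_displacement(acc, dt)
d_rot = integrate_displacement(acc @ R.T, dt)  # rotate each measurement by R

# Equivariance: rotating the input accelerations rotates the displacement.
assert np.allclose(R @ d, d_rot)
```

Because integration is linear, the check holds exactly; the point of the talk is designing learned layers that preserve the same property.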

Yinshuang Xu  - University of Pennsylvania 

Yinshuang Xu is currently pursuing her fifth year of PhD studies in Computer and Information Science at the University of Pennsylvania, where she is advised by Prof. Kostas Daniilidis. She previously graduated from Shanghai Jiao Tong University with a bachelor’s degree in Engineering Mechanics and later received her master’s in Robotics from the University of Pennsylvania. Her research interests include equivariance and geometric deep learning, with a focus on their use in computer vision and machine learning.





Kostas Daniilidis  - University of Pennsylvania 

Kostas Daniilidis is the Ruth Yalom Stone Professor of Computer and Information Science at the University of Pennsylvania, where he has been on the faculty since 1998. He is an IEEE Fellow. He obtained his undergraduate degree in Electrical Engineering from the National Technical University of Athens in 1986 and his PhD in Computer Science from the University of Karlsruhe in 1992, under the supervision of Hans-Hellmut Nagel. He received the Best Conference Paper Award at ICRA 2017 and co-chaired ECCV 2010 and 3DPVT 2006. His most cited works have been on event-based vision, equivariant learning, 3D human pose, and hand-eye calibration.



Acknowledgements  This talk was made possible thanks to Ryan Chan (University of Pennsylvania), who volunteered to host and record the seminar locally. We would also like to thank René Vidal (University of Pennsylvania) and the Innovation in Data Engineering and Science (IDEAS) Initiative for supporting the NeurReps Speaker Series.

October 28, 2024 

11 am ET

Room TBA @MIT


Pushing the Limits of Equivariant Neural Networks

ABSTRACT 

The performance of modern deep models relies on vast datasets for training, but in many domains data is scarce or difficult to collect. Incorporating symmetry constraints into neural networks has produced models called equivariant neural networks (ENNs), which have helped improve sample efficiency. As an application, we consider equivariant policy learning, which can be used to train robots with fewer iterations in reinforcement learning or fewer demonstrations in imitation learning. We will also discuss the limitations of standard equivariant learning, which assumes a known group action, and suggest methods to circumvent this assumption.
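As a toy illustration of the idea behind ENNs, and not the method presented in the talk (the network and group here are illustrative), a finite symmetry group can be baked into an arbitrary model by averaging over the group orbit. The sketch below enforces invariance to 90-degree rotations of planar inputs:

```python
import numpy as np

def net(x, W):
    # An arbitrary, non-equivariant map from point sets in R^2 to a scalar.
    return np.tanh(x @ W).sum()

def c4_invariant_net(x, W):
    # Enforce C4 invariance by averaging the network over the group orbit
    # of the input (symmetrization over 0, 90, 180, 270 degree rotations).
    R = np.array([[0.0, -1.0], [1.0, 0.0]])  # generator of C4
    orbit = [x]
    for _ in range(3):
        orbit.append(orbit[-1] @ R.T)
    return np.mean([net(xi, W) for xi in orbit])

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 2))   # five 2D points
W = rng.normal(size=(2, 3))
R = np.array([[0.0, -1.0], [1.0, 0.0]])

# Invariance check: a rotated input yields the same output.
assert np.isclose(c4_invariant_net(x, W), c4_invariant_net(x @ R.T, W))
```

Group averaging is the simplest way to obtain invariance for a finite group; ENNs instead build the symmetry into the layers themselves, which is where the sample-efficiency gains discussed in the talk come from.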

Robin Walters  - Northeastern University

Robin Walters is an assistant professor in the Khoury College of Computer Sciences at Northeastern University, where he leads the Geometric Learning Lab. Robin's research seeks to develop a fundamental understanding of the role symmetry plays in deep learning and to exploit it to improve the generalization and data efficiency of deep learning methods. This includes designing equivariant neural networks, developing symmetry discovery methods, and creating a theory of symmetry for model parameters. He has applied these methods to improve models in domains with complex dynamics, including climate science, transportation, and robotics.




Dian Wang  - Northeastern University

Dian Wang is a Ph.D. candidate at the Khoury College of Computer Sciences, Northeastern University, where he is co-advised by Prof. Robert Platt and Prof. Robin Walters. His research lies at the intersection of Machine Learning and Robotics, with a particular focus on Geometric Deep Learning and its applications in Robot Learning. Recently, Dian has focused on enhancing robotic manipulation through the use of equivariant methods to boost learning efficiency and performance. Dian has contributed to leading conferences and journals, including ICLR, NeurIPS, CoRL, ICRA, RSS, IJRR, AR, ISRR, and AAMAS. Dian was awarded the JPMorgan Ph.D. Fellowship in 2023 and the Khoury Research Fellowship in 2019.




Acknowledgements  This talk was made possible thanks to Behrooz Tahmasebi (MIT), who volunteered to host and record the seminar locally.

November 7, 2024 

Time TBA


Title TBA

ABSTRACT 

TBA

Melanie Weber - Harvard University

Melanie Weber is an Assistant Professor of Applied Mathematics and of Computer Science at Harvard, where she leads the Geometric Machine Learning Group. Her research focuses on utilizing geometric structure in data to design efficient machine learning and optimization methods with provable guarantees. In 2021-2022, she was a Hooke Research Fellow at the Mathematical Institute in Oxford and a Nicolas Kurti Junior Research Fellow at Brasenose College. In Fall 2021, she was a Research Fellow at the Simons Institute in Berkeley, where she participated in the program Geometric Methods for Optimization and Sampling. Previously, she received her PhD from Princeton University (2021) under the supervision of Charles Fefferman, held visiting positions at MIT and the Max Planck Institute for Mathematics in the Sciences, and interned in the research labs of Facebook, Google, and Microsoft. Her research is supported by the National Science Foundation, the Sloan Foundation, and the Harvard Data Science Initiative.


