Past Workshops
NeurReps 2023
The second NeurReps Workshop was held at NeurIPS 2023 on December 16th in New Orleans. The workshop featured five invited talks, eleven contributed talks, two discussion panels, and 64 accepted submissions presented as posters. All recordings of the day's talks, as well as accepted papers and abstracts, can be found here.
NeurReps 2022
The first NeurReps Workshop was held at NeurIPS 2022 on December 3rd in New Orleans. The workshop featured six invited talks, eleven contributed talks, two discussion panels, and 67 accepted submissions presented as posters. All recordings of the day's talks, as well as accepted papers and abstracts, can be found here.
Invited Speakers & Panelists
Bruno Olshausen
UC Berkeley
Irina Higgins
DeepMind
Taco Cohen
Qualcomm
Erik Bekkers
UvA
Rose Yu
UC San Diego
Kristopher Jensen
Cambridge
Gabriel Kreiman
Harvard
Manu Madhav
UBC
Schedule
8:15 - 8:30
Opening Remarks
Sophia Sanborn
Session I:
Symmetry and Laws of Neural Representation
8:30 - 9:00
In search of invariance in brains and machines
Bruno Olshausen
9:00 - 9:30
Symmetry-based representations for artificial and biological intelligence
Irina Higgins
9:30 - 10:00
From equivariance to naturality
Taco Cohen
10:00 - 10:30
Coffee Break
Contributed Talks
10:30 - 10:40
Is the information geometry of probabilistic population codes learnable?
Vastola, Cohen, Drugowitsch
10:40 - 10:50
Computing Representations for Lie Algebraic Networks
Shutty, Wierzynski
10:50 - 11:00
Kendall Shape-VAE: Learning Shapes in a Generative Framework
Vadgama, Tomczak, Bekkers
11:00 - 11:05
Equivariance with Learned Canonical Mappings
Kaba, Mondal, Zhang, Bengio, Ravanbakhsh
11:05 - 11:10
Capacity of Group-invariant Linear Readouts from Equivariant Representations: How Many Objects can be Linearly Classified Under All Possible Views?
Farrell, Bordelon, Trivedi, Pehlevan
11:10 - 11:15
Do Neural Networks Trained with Topological Features Learn Different Internal Representations?
McGuire, Jackson, Emerson, Kvinge
11:15 - 11:20
Expander Graph Propagation
Deac, Lackenby, Veličković
11:20 - 11:25
Homomorphism AutoEncoder -- Learning Group Structured Representations from Observed Transitions
Keurti, Pan, Besserve, Grewe, Schölkopf
11:25 - 11:30
Sheaf Attention Networks
Barbero, Bodnar, Sáez de Ocáriz Borde, Liò
11:30 - 11:35
On the Expressive Power of Geometric Graph Neural Networks
Joshi, Bodnar, Mathis, Cohen, Liò
Panel Discussion I:
Geometric and topological principles for representation learning in ML
11:35 - 12:05
Panelists
Irina Higgins, Taco Cohen, Erik Bekkers, Rose Yu
Moderator
Nina Miolane
12:05 - 1:30
Lunch Break
Session II:
Latent Geometry in Neural Systems
1:30 - 2:00
Generative models of non-Euclidean neural population dynamics
Kristopher Jensen
2:00 - 2:30
Robustness of representations in artificial and biological neural networks
Gabriel Kreiman
2:30 - 3:00
Neural Ideograms and Equivariant Representation Learning
Erik Bekkers
Panel Discussion II:
Geometric and topological principles for representations in the brain
3:00 - 3:30
Panelists
Bruno Olshausen, Kristopher Jensen, Gabriel Kreiman, Manu Madhav
Moderator
Christian Shewmake
Poster Session
Ballroom A/B
3:30 - 5:00
Poster Session
Contributing Authors