Graduate student studying machine learning and neuroscience
I’m a fourth-year PhD student studying machine learning and theoretical neuroscience at the Gatsby Computational Neuroscience Unit, where I’m fortunate to be advised by Maneesh Sahani and Matt Botvinick. Broadly, I’m interested in the interplay between biological and artificial intelligence and in leveraging the algorithms used by the brain to improve machine learning, particularly reinforcement learning (RL). Currently, I’m working on multitask and meta-RL, with an emphasis on enabling agents to identify and exploit shared structure among tasks. I’m also excited by optimization theory and its applications to RL. Last summer, I interned with Tom Zahavy on the Discovery team at DeepMind, where I worked on optimization for constrained RL.
Before arriving at Gatsby, I worked with the Horizons and Deep Collective teams at Uber AI Labs on optimization techniques for deep learning. Previously, I was a master’s student in computer science at Columbia, where I worked with Larry Abbott, Ashok Litwin-Kumar, and Ken Miller at the Center for Theoretical Neuroscience on biologically plausible learning rules and architectures for deep learning. I did my undergrad at Princeton, where I majored in neuroscience with minors in computer science and linguistics and was advised by Jonathan Pillow.
Google Scholar | CV
Email: ted@gatsby.ucl.ac.uk
Moskovitz T, O’Donoghue B, Veeriah V, Flennerhag S, Singh S, Zahavy T (2023). ReLOAD: Reinforcement Learning with Optimistic Ascent-Descent for Last-Iterate Convergence in Constrained MDPs. Preprint. paper | website
Moskovitz T, Kao T, Sahani M*, Botvinick M* (2023). Minimum Description Length Control. To appear, International Conference on Learning Representations (ICLR). *Equal contribution. paper
Moskovitz T, Miller K, Sahani M, Botvinick M (2022). A Unified Theory of Dual Process Control. Preprint. paper
Moskovitz T, Arbel M, Parker-Holder J, Pacchiano A (2022). Towards an Understanding of Default Policies in Multitask Policy Optimization. International Conference on Artificial Intelligence and Statistics (AISTATS). paper (Best Paper Award Honorable Mention)
Moskovitz T, Wilson SR, Sahani M (2022). A First-Occupancy Representation for Reinforcement Learning. International Conference on Learning Representations (ICLR). paper | code | talk at Cosyne
Moskovitz T, Parker-Holder J, Pacchiano A, Arbel M, Jordan MI (2021). Tactical Optimism and Pessimism for Deep Reinforcement Learning. Neural Information Processing Systems (NeurIPS). paper | code
Moskovitz T*, Arbel M*, Huszar F, Gretton A (2021). Efficient Wasserstein Natural Gradients for Reinforcement Learning. International Conference on Learning Representations (ICLR). *Equal contribution. paper | code
Li WK, Moskovitz T, Kanagawa H, Sahani M (2020). Amortised Learning by Wake-Sleep. International Conference on Machine Learning (ICML). paper | code
Moskovitz T, Wang R, Lan J, Kapoor S, Yosinski J, Rawal A (2019). Learned First-Order Preconditioning. Beyond First Order Methods in ML Workshop, Neural Information Processing Systems. paper
Lindsay G, Moskovitz T, Yang G, Miller K (2019). Do Biologically-Plausible Architectures Produce Biologically-Realistic Models? Conference on Cognitive Computational Neuroscience. paper
Sun M, Li J, Moskovitz T, Lindsay G, Miller K, Dipoppa M, Yang G (2019). Understanding the Functional and Structural Differences Across Excitatory and Inhibitory Neurons. Conference on Cognitive Computational Neuroscience. paper
Moskovitz T, Litwin-Kumar A, Abbott LF (2018). Feedback alignment in deep convolutional networks. Preprint. paper
Moskovitz T, Roy NA, Pillow JW (2018). A comparison of deep learning and linear-nonlinear cascade models for neural encoding. Preprint. paper
Hsu E, Fowler E, Staudt L, Greenberg M, Moskovitz T, Shattuck DW, Joshi SH (2016). DTI of corticospinal tracts pre- and post-physical therapy in children with cerebral palsy. Proceedings of the Organization of Human Brain Mapping. poster