Ted Moskovitz


Graduate student studying machine learning and neuroscience

View My GitHub Profile

About Me

Summer 2022: I’m interning at DeepMind, working with Tom Zahavy as part of the Discovery Team.

I’m an incoming fourth-year PhD student studying machine learning and theoretical neuroscience at the Gatsby Computational Neuroscience Unit, where I’m fortunate to be advised by Maneesh Sahani and Matt Botvinick. Broadly, I’m interested in the interplay between biological and artificial intelligence and in leveraging the algorithms used by the brain to improve machine learning, particularly reinforcement learning. Right now, I’m working on multitask and meta-RL, with a particular focus on enabling agents to identify and exploit structure shared across tasks. I’m also excited by optimization theory and its applications to RL.

Before arriving at Gatsby, I worked with the Horizons and Deep Collective teams at Uber AI Labs on optimization techniques for deep learning. Previously, I was a master’s student in computer science at Columbia, where I worked with Larry Abbott, Ashok Litwin-Kumar, and Ken Miller at the Center for Theoretical Neuroscience on biologically plausible learning rules and architectures for deep learning. I did my undergrad at Princeton, where I majored in neuroscience with minors in computer science and linguistics, and was advised by Jonathan Pillow.

Google Scholar · CV · Twitter

Email: ted@gatsby.ucl.ac.uk

Publications / Original Work

Moskovitz T, Kao T, Sahani M, Botvinick M (2022). Minimum Description Length Control. Preprint. paper

Moskovitz T, Arbel M, Parker-Holder J, Pacchiano A (2022). Towards an Understanding of Default Policies in Multitask Policy Optimization. International Conference on Artificial Intelligence and Statistics (AISTATS). paper (Best Paper Honorable Mention)

Moskovitz T, Wilson SR, Sahani M (2022). A First-Occupancy Representation for Reinforcement Learning. International Conference on Learning Representations (ICLR). paper code

Moskovitz T, Parker-Holder J, Pacchiano A, Arbel M, Jordan MI (2021). Tactical Optimism and Pessimism for Deep Reinforcement Learning. Neural Information Processing Systems (NeurIPS). paper code

Moskovitz T, Arbel M, Huszar F, Gretton A (2021). Efficient Wasserstein Natural Gradients for Reinforcement Learning. International Conference on Learning Representations (ICLR). paper code

Li WK, Moskovitz T, Kanagawa H, Sahani M (2020). Amortised Learning by Wake-Sleep. International Conference on Machine Learning (ICML). paper code

Moskovitz T, Wang R, Lan J, Kapoor S, Yosinski J, Rawal A (2019). Learned First-Order Preconditioning. Beyond First Order Methods in ML Workshop, Neural Information Processing Systems. paper

Lindsay G, Moskovitz T, Yang G, Miller K (2019). Do Biologically-Plausible Architectures Produce Biologically-Realistic Models? Conference on Cognitive Computational Neuroscience. paper

Sun M, Li J, Moskovitz T, Lindsay G, Miller K, Dipoppa M, Yang G (2019). Understanding the Functional and Structural Differences Across Excitatory and Inhibitory Neurons. Conference on Cognitive Computational Neuroscience. paper

Moskovitz T, Litwin-Kumar A, Abbott LF (2018). Feedback alignment in deep convolutional networks. Preprint. paper

Moskovitz T, Roy NA, Pillow JW (2018). A comparison of deep learning and linear-nonlinear cascade approaches to neural encoding. Preprint. paper

Hsu E, Fowler E, Staudt L, Greenberg M, Moskovitz T, Shattuck DW, Joshi SH (2016). DTI of corticospinal tracts pre- and post-physical therapy in children with cerebral palsy. Proceedings of the Organization of Human Brain Mapping. poster

Course Projects and Theses

Moskovitz T, Krone J, Brand R (2018). Toward Improved Meta-Imitation Learning. Final project for the Humanoid Robotics course at Columbia. link

Moskovitz T (2018). Assessing the Resistance of Biologically-Inspired Neural Networks to Adversarial Attack. Final project for the Security & Robustness of ML Systems course at Columbia. link

Moskovitz T (2017). Deep Transfer Learning for Language Generation from Limited Corpora. Final project for the Advanced Topics in Deep Learning course at Columbia. link

Moskovitz T (2017). Deep Learning Models for Neural Encoding in the Early Visual System. Princeton senior thesis. link

Notes

Theoretical Neuroscience Course Guide