Alexandra Maria Proca

Imperial College London, Department of Computing


I’m a PhD student and a President’s Scholar in the Department of Computing at Imperial College London, supervised by Pedro Mediano and Murray Shanahan. Broadly, my research interests span computational/theoretical neuroscience and deep learning theory. I’m interested in using deep learning models to develop general theories of learning and cognition, in order to better understand both biological and artificial minds.

I’m particularly interested in understanding learning and cognition through neural computation: how neural populations learn and represent information, perform computations, and compose these processes for flexible behavior, and how computation emerges from underlying connectivity. Some related topics include learning dynamics, feature learning, representational geometry, recurrent dynamics, and modularity/compositionality/gating. My work is grounded in studying simple, interpretable models (such as linear networks and low-rank RNNs) using mathematics, statistical physics, and dynamical systems approaches. At the moment, I’m working on projects studying the learning dynamics of RNNs, feature superposition in RNNs, and cognitive flexibility in gated linear networks. If my work or any of these topics sound interesting to you, feel free to reach out!

Before attending Imperial, I received my bachelor’s degree in computer science and neuroscience (with a music minor) from the University of North Carolina at Chapel Hill and then completed a master’s degree in machine learning at University College London. During my degrees, I worked as a research assistant in several labs on various topics in machine learning and neuroscience. After finishing my master’s, I worked as a research assistant in the Department of Computer Science at ETH Zürich with João Sacramento, studying the use of hypernetworks for meta-learning. For more information, you can view my CV.

I really enjoy discussing science and philosophy with others. I’m currently involved with a journal club on mechanistic interpretability at Imperial. Previously, in Zürich, I helped organize Qualiaheads, a club of graduate students studying the state of research in consciousness science.

Outside of research, I love anything outdoors (marathon running, hiking, skiing, etc.). To date, I’ve run 5 marathons with a PR of 3:23 and a half marathon PR of 1:34. I also enjoy playing music and writing. I’ve been playing piano for 18 years, and while I lived in Zürich, I was a singer in a local band. I occasionally write poetry and (less frequently) share it.

news

Feb 2026 Our paper on feature geometry in RNNs under memory demands was accepted as an Oral to ICLR.
Oct 2025 I’ll be doing a research visit in Srdjan Ostojic’s lab at ENS in Paris until July.
May 2025 Our paper on learning dynamics in linear RNNs was accepted as an Oral to ICML.
Mar 2025 I’ll be giving a talk at a COSYNE workshop on neural dynamics.
Jan 2025 Our paper on learning dynamics and feature learning was accepted to ICLR 2025.
Oct 2024 Our commentary paper on supporting NeuroAI trainees is now out in Nature Communications.
Sep 2024 Our paper on task abstractions in gated linear networks was accepted as a NeurIPS Spotlight!
Aug 2024 I’ll be presenting a poster on learning dynamics in linear RNNs at CCN in Boston.

selected publications

  1. Temporal superposition and feature geometry in RNNs under memory demands
    Pratyaksh Sharma*, Alexandra M. Proca*, Lucas Prieto, and Pedro A.M. Mediano
    ICLR Oral, 2026
  2. Learning dynamics in linear recurrent neural networks
    ICML Oral, 2025
  3. Flexible task abstractions emerge in linear networks with fast and bounded units
    NeurIPS Spotlight, 2024
  4. From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks
    ICLR, 2025