Citta
Exploring computational neuroscience.
I built Citta at the end of 2024 after discovering Jack Gallant's semantic brain viewer, which lets you inspect cortical lexical-semantic maps at the group level, vertex by vertex. The data behind the viewer was generated by pooling lexical-semantic maps from 24 participants who each listened to several hours of natural narrative stories; the pooled atlas accounts for roughly 80% of the variance in any individual's lexical-semantic map.1
This launched my interest in computational neuroscience. I was fascinated by the idea that, with machine learning, statistical analysis, and a ton of brain data (fMRI), we can work out what different brain regions do. This led me to ask: if different brain regions process different semantic information, how is that information actually stored in the human brain?
My instinct was that the storage is much more visual than linguistic. To explore this, I built Citta: an interactive brain viewer that lets users explore semantic mappings of the brain through visual representations generated by OpenAI's DALL·E 2 (recently updated to Google's Nano Banana Pro).
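Citta's actual pipeline isn't shown in this post, but as a rough sketch of the kind of call involved: assuming one generated image per semantic label, the snippet below uses the OpenAI Python SDK's Images API. The label list and prompt template are hypothetical illustrations, not Citta's real inputs.

```python
# Minimal sketch: one DALL·E 2 image per semantic concept label.
# Labels and prompt wording are hypothetical; Citta's real pipeline may differ.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SEMANTIC_LABELS = ["tools", "family", "numbers", "places"]  # hypothetical labels

def generate_concept_image(label: str) -> str:
    """Request a visual representation of one semantic concept."""
    response = client.images.generate(
        model="dall-e-2",
        prompt=f"A clear, literal illustration of the concept '{label}'",
        size="512x512",
        n=1,
    )
    return response.data[0].url  # URL of the generated image

for label in SEMANTIC_LABELS:
    print(label, "->", generate_concept_image(label))
```

The per-label images could then be attached to the corresponding cortical vertices in the viewer, so that hovering a region shows a picture rather than a word.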
This led to my research at the Wu Tsai Neuroscience Institute, where I explored novel applications of MindEye2, a state-of-the-art model that digitally reconstructs the images you are viewing from brain activity using only 1 hour of training data. This was a vast improvement over previous models, which needed 40+ hours of data.2
This also led to my research at the Yale School of Medicine, where I explored the role of the Default Mode Network (DMN) in early prediction of Alzheimer's disease. I was especially grateful to apply my experience in computer science, machine learning, and computational neuroscience toward Alzheimer's research, given that both of my grandparents are affected by the disease.