My research focuses on building machine learning tools for time-domain astronomy, including automating critical parts of real-time supernova discovery and classification workflows.
BTSbot is designed to support the Bright Transient Survey (BTS), a program that aims to spectroscopically classify all bright extragalactic transients found in ZTF data. The model jointly uses image cutouts and alert metadata to automatically trigger requests for spectroscopic follow-up of the new transients it discovers. BTSbot has discovered thousands of transients and contributed to the world's first fully automated detection, identification, spectroscopic classification, and public reporting of a supernova (see Press Release).
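The core multimodal idea, fusing image-derived features with alert metadata before a single classification head, can be sketched minimally as below. This is an illustrative toy with random weights and placeholder shapes, not BTSbot's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_image_features(cutout, W_conv):
    """Tiny stand-in for a CNN branch: one linear layer on the
    flattened cutout followed by a ReLU."""
    return np.maximum(0.0, W_conv @ cutout.ravel())

def fused_score(cutout, metadata, W_conv, W_head, b):
    """Concatenate image features with metadata, then apply one
    logistic head to get a follow-up score in (0, 1)."""
    features = np.concatenate([extract_image_features(cutout, W_conv), metadata])
    return 1.0 / (1.0 + np.exp(-(W_head @ features + b)))

# Hypothetical shapes: a 63x63 cutout and 3 metadata columns
# (placeholder values, e.g. magnitude-like quantities).
cutout = rng.normal(size=(63, 63))
metadata = np.array([18.5, 0.3, 1.2])
W_conv = rng.normal(size=(8, 63 * 63)) * 0.01
W_head = rng.normal(size=(8 + 3,)) * 0.1

score = fused_score(cutout, metadata, W_conv, W_head, b=0.0)
trigger_followup = score > 0.5  # thresholding would queue a spectroscopic request
```

In a real system the image branch would be a trained convolutional network and the threshold tuned against survey purity/completeness requirements.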
I've also run experiments benchmarking vision model architectures for alert streams to improve how time-domain astronomers design such models. I found that adopting modern practices from the computer vision literature yields better performance at lower computational cost and makes models significantly easier to design.
I've repurposed BTSbot to also assist with rapid identification and target-of-opportunity (ToO) follow-up of young, nearby supernovae. Automation here allows me to probe the earliest phases of supernovae, which are very difficult to access with traditional workflows. I've used BTSbot to fully automatically trigger ToO observations with the SED Machine, the SOAR Telescope, LCO telescopes, and the Swift space telescope; in one case, space-based Swift follow-up began just six minutes after BTSbot discovered the transient. I use these observations to constrain supernova physics such as mass-loss rates and progenitor properties.
I also wrote a Nature Astronomy Perspective article on how the time-domain astronomy community can maximize the return from our large surveys by automating supernova discovery and follow-up workflows.
StarEmbed is a benchmark for testing time series foundation models (TSFMs) on variable-star light curves. We aim to improve and accelerate variable star analysis by measuring the performance of various TSFMs on astrophysically motivated downstream tasks such as clustering, classification, and anomaly detection in a reproducible and transparent benchmark.
Publication in prep.
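A common way to probe frozen TSFM embeddings on a downstream classification task is a simple nearest-neighbour classifier. The sketch below is an illustrative example on toy data, not StarEmbed's actual evaluation protocol:

```python
import numpy as np

def knn_predict(train_emb, train_labels, test_emb):
    """1-nearest-neighbour probe: assign each test embedding the label of
    its closest training embedding (Euclidean distance)."""
    d = np.linalg.norm(test_emb[:, None, :] - train_emb[None, :, :], axis=-1)
    return train_labels[np.argmin(d, axis=1)]

# Toy data: two well-separated "classes" of 4-d embeddings standing in for
# light-curve embeddings of two variable-star types.
rng = np.random.default_rng(1)
train_emb = np.vstack([rng.normal(0, 0.1, (10, 4)), rng.normal(3, 0.1, (10, 4))])
train_labels = np.array([0] * 10 + [1] * 10)
test_emb = np.vstack([rng.normal(0, 0.1, (5, 4)), rng.normal(3, 0.1, (5, 4))])

preds = knn_predict(train_emb, train_labels, test_emb)
accuracy = (preds == np.array([0] * 5 + [1] * 5)).mean()
```

Probes like this are attractive for benchmarking because they add no trainable parameters, so performance differences reflect the embeddings themselves rather than the downstream classifier.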
RBbot is a computer vision model that classifies alerts from the new LS4 survey as real or bogus. While training RBbot, I benchmarked various domain adaptation techniques to identify how to most effectively reuse real/bogus labels from previous surveys.
Publication in prep.
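One standard technique from the domain adaptation family is correlation alignment (CORAL; Sun et al. 2016), which matches the second-order statistics of source-domain features (e.g. a previous survey) to the target domain (e.g. LS4). The sketch below is illustrative and not necessarily one of the techniques benchmarked for RBbot:

```python
import numpy as np

def _matrix_power(C, p):
    """Real p-th power of a symmetric positive-definite matrix
    via its eigendecomposition."""
    vals, vecs = np.linalg.eigh(C)
    return (vecs * vals**p) @ vecs.T

def coral(source, target, eps=1e-5):
    """CORAL: whiten the source features, then re-colour them with the
    target covariance so second-order statistics match across domains."""
    cs = np.cov(source, rowvar=False) + eps * np.eye(source.shape[1])
    ct = np.cov(target, rowvar=False) + eps * np.eye(target.shape[1])
    return source @ _matrix_power(cs, -0.5) @ _matrix_power(ct, 0.5)

# Toy features: the "target" survey has broader feature distributions.
rng = np.random.default_rng(2)
source = rng.normal(0, 1.0, (500, 3))   # e.g. features from a previous survey
target = rng.normal(0, 2.0, (500, 3))   # e.g. features from the new survey
aligned = coral(source, target)
```

After alignment, a classifier trained on `aligned` with the old survey's real/bogus labels sees feature statistics that resemble the new survey's.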
As part of my undergraduate thesis at the University of Michigan, I wrote a modeling code called NIMBLE, which deconvolves observational effects from the underlying velocity and density profiles of halo stars using a novel MCMC-based routine to estimate the cumulative mass profile of the Milky Way.
We are now applying this code to real data from DESI and Gaia to map the dark matter distribution in the Milky Way. Figure 10 and Appendix C of Medina et al. (2025) show estimates of the Milky Way's cumulative mass profile from NIMBLE using DESI and Gaia data.
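NIMBLE's actual routine is considerably more involved, but the core MCMC idea it builds on, Metropolis sampling of a posterior over model parameters, can be sketched minimally as follows (a toy one-parameter example, not NIMBLE's implementation):

```python
import numpy as np

def metropolis(log_prob, x0, n_steps, step, rng):
    """Minimal Metropolis sampler: propose a Gaussian step and accept it
    with probability min(1, exp(change in log-probability))."""
    samples = np.empty(n_steps)
    x, lp = x0, log_prob(x0)
    for i in range(n_steps):
        prop = x + step * rng.normal()
        lp_prop = log_prob(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

# Toy target: a Gaussian "posterior" over a single mass-like parameter
# with mean 1.0 and standard deviation 0.2 (hypothetical values).
rng = np.random.default_rng(3)
chain = metropolis(lambda m: -0.5 * ((m - 1.0) / 0.2) ** 2,
                   x0=0.0, n_steps=20000, step=0.5, rng=rng)
posterior_mean = chain[5000:].mean()  # discard burn-in before summarizing
```

The same accept/reject machinery generalizes to multi-parameter profiles, where the log-probability compares model predictions against the observed stellar kinematics.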