References
Here you'll find people familiar with my research. To respect their time and ensure proper context, please write to me first; do not reach out to them directly unless you have already been in contact with them.
Erin Robertson, Program Lead at LASR. Erin evaluated my research agenda and technical background as part of the London safety community's mentoring and selection processes. She can provide a detailed assessment of my research potential and my fit within the UK AI safety ecosystem.
Bryce Meyer, Core Contributor at TransformerLens and Poseidon Research. I have worked with Bryce on my technical contributions to TransformerLens and my work in mechanistic interpretability. He is familiar with my engineering skills and can verify my ability to work with model internals and representations.
Ben Sams, Research Scientist at AISI. Ben can evaluate the analytical depth of my work and its relevance to the current challenges in evaluating frontier AI systems.
Gurkenglas, Independent safety researcher. Gurkenglas has closely reviewed my research on unlearning methods and my approach to strategic challenges in AI safety, and can offer insights into my capabilities as an independent researcher.
Mikhail Seleznyov, Research Scientist at AIRI. We had in-depth discussions on the fundamental rationale behind unlearning methods for AI alignment and on which approaches are effective. Mikhail can attest to my conviction and my ability to articulate and robustly defend my research perspective.