I run Dynamical Systems, a research lab focused on open-ended scientific discovery. We build the environments, evaluations, and verification systems that make scientific work trainable.

Previously, I co-founded Arc Intelligence, where we worked on continual learning for production agents. Our core framework, ATLAS, turned deployed agent trajectories into feedback for inference-time adaptation and on-policy distillation. Before that, I built RL environments and distributed training infrastructure at NEAR, and helped grow open-source developer ecosystems at Protocol Labs.

I've come to believe that the best coaches are great teachers, and the best teachers are great learners. I built a career in machine learning by asking questions. It started when I was a football coach at Ohio State, Clemson, and the Los Angeles Rams, analyzing athlete behavior patterns and teaching players to excel in high-performance environments. That work led me to a different arena: early-stage investing at Emerson Collective, an assistant professorship at NYU, and doctoral study at UIUC, where I examined how to keep learners at the edge of learnability. How do you construct environments where difficulty produces growth rather than frustration?

The science of learning is often counterintuitive. Making discovery open-ended means shifting from goal-oriented, hypothesis-driven methodologies to approaches that prioritize curiosity, exploration, and the continuous generation of new problems. That shift is what led me to start Dynamical. The learners are now models and the environments are RL training runs, but the central design problem is the same.

I contribute to the open-source ML stack, spanning inference (SGLang) and training (Slime, ATLAS). Primary stack: Python, Rust, PyTorch, Ray, SGLang.