I'm an intern on the MLSys team at OctoML, a startup automating end-to-end optimization of deep learning (DL) models using the Apache TVM compiler stack. Currently, I'm working on optimizing the DL training loop. Previously, I was a research assistant in the UW SAMPL research group, working on programming languages (PL) and machine learning (ML) systems research. I received my B.S. in Computer Science from the Allen School at UW in June 2020, under the supervision of Zachary Tatlock.
|Jan. 2021:||"Dynamic Tensor Rematerialization" accepted to ICLR 2021 with a spotlight presentation!|
|Dec. 2020:||applied to CS Ph.D. programs.|
|Sep. 2020:||started as an MLSys intern at OctoML.|
Broadly speaking, my research lies in (and around) the intersection of programming languages, machine learning, and systems. I enjoy synthesizing new techniques using ideas from each area, with an emphasis on improving large systems. I am particularly interested in building the next generation of compiler stacks, where data-driven methods play a central role in optimization and code generation. I believe that properly designed symbolic methods (e.g., rewriting systems, type-guided synthesis, counterexample-guided inductive synthesis (CEGIS)) can benefit greatly from incorporating cost models and heuristics trained on the large corpus of software that exists today.
Publications and Preprints
 Dynamic Tensor Rematerialization
Marisa Kirisame,† Steven Lyubomirsky,† Altan Haan,† Jennifer Brennan, Mike He, Jared Roesch, Tianqi Chen, Zachary Tatlock.
ICLR 2021 spotlight.
 Simulating Dynamic Tensor Rematerialization*
Altan Haan, supervised by Zachary Tatlock.
Honors thesis, 2020.
DTR. Dynamic Tensor Rematerialization (DTR) is a runtime technique for reducing the peak memory required to train deep learning models. DTR is a "checkpointing" method: it frees intermediate results under memory pressure and recomputes them as needed, trading extra compute for less space. Unlike existing checkpointing methods, which require offline planning, DTR is an online algorithm that operates entirely at runtime, enabling checkpointing for arbitrarily dynamic models. Notably, DTR produces near-optimal checkpointing schedules when compared against Checkmate, a state-of-the-art static technique based on integer linear programming (ILP). Check out our preprint for more details.
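The free-and-recompute idea can be sketched in a few dozen lines. This is not DTR's actual implementation (the real system hooks into a tensor runtime, pins in-flight tensors, and uses carefully designed heuristics); the names, the `Runtime`/`Tensor` classes, and the eviction score below are simplified illustrations of the general shape: evict tensors that are cheap to recompute, large, and stale, and rematerialize them (recursively, from their parents) on access.

```python
class Tensor:
    """An illustrative rematerializable tensor: it remembers how to recompute itself."""
    def __init__(self, compute, parents, size, cost):
        self.compute = compute   # closure recomputing the value from parent values
        self.parents = parents   # input Tensors
        self.size = size         # memory occupied when materialized
        self.cost = cost         # estimated compute cost to rematerialize
        self.value = None        # None means "evicted"
        self.last_use = 0

class Runtime:
    """Toy DTR-style runtime: evicts under a memory budget, recomputes on demand."""
    def __init__(self, budget):
        self.budget = budget
        self.used = 0
        self.clock = 0           # logical timestamp for staleness
        self.live = []           # currently materialized tensors

    def op(self, f, parents, size, cost):
        t = Tensor(f, parents, size, cost)
        self._fill(t)
        return t

    def get(self, t):
        self.clock += 1
        t.last_use = self.clock
        if t.value is None:
            self._fill(t)        # rematerialize; may recursively recompute parents
        return t.value

    def _fill(self, t):
        args = [self.get(p) for p in t.parents]
        self._reserve(t.size)
        t.value = t.compute(*args)
        self.used += t.size
        self.live.append(t)
        self.clock += 1
        t.last_use = self.clock

    def _reserve(self, size):
        # Evict until the new tensor fits. Illustrative heuristic: prefer
        # victims that are cheap to recompute, large, and stale.
        while self.used + size > self.budget and self.live:
            score = lambda u: u.cost / (u.size * (self.clock - u.last_use + 1))
            victim = min(self.live, key=score)
            self.live.remove(victim)
            victim.value = None
            self.used -= victim.size
```

For example, with `budget=2` and a chain of three unit-size tensors, creating the third evicts the stalest cheap tensor, and a later `get` on the evicted one transparently recomputes it.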
TVM/Relay. Relay is the high-level differentiable IR used internally by the TVM DL compiler. With an ML-like syntax, Relay lets users write differentiable programs with control flow, generalizing static computation graphs. I have contributed gradient implementations for several tensor operators in Relay, which are required for automatic differentiation (AD) and backpropagation. I have also helped maintain and improve Relay's AD code as we push towards a fully functioning Relay training loop. Leveraging TVM's end-to-end optimizations for training should bring substantial performance improvements over current, more ad hoc approaches.
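To see why AD needs a gradient registered per operator, here is a minimal scalar reverse-mode AD sketch. It is purely illustrative and not Relay's API (Relay's AD is a compiler pass over a typed tensor IR), but the core idea is the same: each operator records the local partial derivatives for its inputs, and backpropagation accumulates adjoints through the graph in reverse topological order.

```python
class Var:
    """A node in the computation graph, recording local partial derivatives."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # list of (input Var, local partial derivative)
        self.grad = 0.0

# Each operator supplies its own gradient rule, analogous to registering
# a gradient for a tensor operator.
def add(a, b):
    # d(a+b)/da = 1, d(a+b)/db = 1
    return Var(a.value + b.value, [(a, 1.0), (b, 1.0)])

def mul(a, b):
    # d(ab)/da = b, d(ab)/db = a
    return Var(a.value * b.value, [(a, b.value), (b, a.value)])

def backprop(out):
    """Accumulate adjoints into .grad, visiting nodes in reverse topological order."""
    order, seen = [], set()
    def visit(v):
        if id(v) in seen:
            return
        seen.add(id(v))
        for p, _ in v.parents:
            visit(p)
        order.append(v)
    visit(out)
    out.grad = 1.0
    for v in reversed(order):
        for p, local in v.parents:
            p.grad += local * v.grad
```

For z = x*x + x*y with x = 3, y = 4, this yields dz/dx = 2x + y = 10 and dz/dy = x = 3, with the shared use of x handled correctly by adjoint accumulation.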
Program synthesis. Forthcoming.
|Records:||my current collection and wishlist.|
|Listening:||tracking music that I listen to along with some brief commentary.|
This page was generated on Mon Jan 25 21:06:27 2021.