LLM-Training-Puzzles
A collection of 8 challenging puzzles about training large language models across multiple GPUs. The puzzles are hands-on exercises in memory efficiency and compute pipelining, two concerns central to modern large-scale AI work. They run in Colab, so you can explore neural network training at scale without extensive infrastructure of your own. The series builds on Sasha Rush's earlier puzzle collections and offers a focused dive into the challenges of distributed training.
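To give a flavor of the compute-pipelining theme, here is a minimal sketch (my own illustration, not code from the puzzles) of a GPipe-style forward schedule: with S pipeline stages and M microbatches, stage s processes microbatch m at time step s + m, so the whole forward pass takes S + M - 1 steps instead of the S * M steps a one-batch-at-a-time pipeline would need. The function name `pipeline_forward_schedule` is hypothetical.

```python
# Toy illustration of pipeline parallelism (assumed example, not from the repo).

def pipeline_forward_schedule(num_stages: int, num_microbatches: int):
    """Return {time_step: [(stage, microbatch), ...]} for a forward pipeline."""
    schedule = {}
    for stage in range(num_stages):
        for mb in range(num_microbatches):
            # Stage s can see microbatch m only after m has cleared stages 0..s-1.
            step = stage + mb
            schedule.setdefault(step, []).append((stage, mb))
    return schedule

schedule = pipeline_forward_schedule(num_stages=4, num_microbatches=8)
total_steps = max(schedule) + 1
print(total_steps)  # 4 + 8 - 1 = 11 steps, vs. 4 * 8 = 32 without pipelining
```

Note how in the middle of the schedule every stage is busy at once (e.g. at step 3 all four stages each hold a different microbatch); the puzzles explore how to reach and reason about that kind of utilization.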