mlp
A Multi-Layer Perceptron (MLP) for n-gram language modeling, following the 2003 work of Bengio et al. This project provides three implementations: C, numpy, and PyTorch, with the PyTorch version using Autograd for gradient computation instead of manual backpropagation. The MLP reaches lower validation loss with fewer parameters, at the cost of more compute. Future work includes hyperparameter tuning and consolidating the versions.
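To make the idea concrete, here is a minimal sketch of a Bengio-style MLP n-gram model trained with PyTorch Autograd. All hyperparameters and the random token data are illustrative assumptions, not the module's actual configuration: each of the previous `context` tokens is embedded via a lookup table, the embeddings are concatenated, passed through a tanh hidden layer, and projected to vocabulary logits.

```python
import torch
import torch.nn.functional as F

# Toy hyperparameters (assumptions for illustration, not the module's values)
vocab_size, context, emb_dim, hidden = 27, 3, 10, 64

g = torch.Generator().manual_seed(42)

# Bengio-style MLP parameters: embedding table, tanh hidden layer, output layer
C  = torch.randn(vocab_size, emb_dim, generator=g).requires_grad_()
W1 = (torch.randn(context * emb_dim, hidden, generator=g) * 0.1).requires_grad_()
b1 = torch.zeros(hidden).requires_grad_()
W2 = (torch.randn(hidden, vocab_size, generator=g) * 0.1).requires_grad_()
b2 = torch.zeros(vocab_size).requires_grad_()
params = [C, W1, b1, W2, b2]

# Random tokens standing in for a real corpus (assumption)
X = torch.randint(0, vocab_size, (256, context), generator=g)  # n-gram contexts
Y = torch.randint(0, vocab_size, (256,), generator=g)          # next-token targets

losses = []
for step in range(50):
    emb = C[X].view(X.shape[0], -1)   # embed and concatenate: (B, context*emb_dim)
    h = torch.tanh(emb @ W1 + b1)     # hidden layer: (B, hidden)
    logits = h @ W2 + b2              # (B, vocab_size)
    loss = F.cross_entropy(logits, Y)
    for p in params:
        p.grad = None
    loss.backward()                   # Autograd computes all gradients
    with torch.no_grad():
        for p in params:
            p -= 0.1 * p.grad         # plain SGD step
    losses.append(loss.item())

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

The same forward and backward pass can be written out by hand in the C and numpy versions; Autograd simply removes the need to derive the gradients manually.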