wikipedia2vec
Wikipedia2Vec, developed by Studio Ousia, is a tool for learning embeddings of words and Wikipedia entities from a Wikipedia dump. It jointly learns word and entity embeddings with a skip-gram-based model, placing a word and the entities related to it close together in a single vector space, and it can be run through a simple command line interface. Pretrained models are available in 12 languages, and the embeddings have been applied to tasks such as entity linking, named entity recognition, and text classification. Extensive documentation accompanies the tool and its pretrained models.
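As a sketch of typical usage with the Python API (this assumes the `wikipedia2vec` package is installed and that a pretrained model file has been downloaded from the project site; the model path below is a placeholder, not a real file):

```python
from wikipedia2vec import Wikipedia2Vec

# Placeholder path: substitute the pretrained model you downloaded.
MODEL_FILE = 'path/to/pretrained_model.pkl'

# Load the pretrained word and entity embeddings.
wiki2vec = Wikipedia2Vec.load(MODEL_FILE)

# Words and entities live in the same vector space,
# so both kinds of lookups return comparable vectors.
word_vec = wiki2vec.get_word_vector('tokyo')
entity_vec = wiki2vec.get_entity_vector('Tokyo')

# List the items nearest to the entity "Tokyo".
for item, score in wiki2vec.most_similar(wiki2vec.get_entity('Tokyo'), 5):
    print(item.title if hasattr(item, 'title') else item.text, score)
```

Because similar words and entities are placed near each other, the nearest neighbors of an entity typically include related entities and topically related words.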