#tokenization

code2prompt
code2prompt converts codebases into detailed LLM prompts, with features such as Handlebars-based template customization, token counting, and .gitignore adherence. Users can filter files with glob patterns and optionally include Git diffs. The tool supports multiple tokenizers and makes saving outputs straightforward. Reusable templates cover common workflows such as code documentation, bug fixing, and performance improvements. Installation options include binary downloads, source builds, AUR, and Nix packages.
spacy-stanza
The spacy-stanza package wraps Stanza's (formerly StanfordNLP) models in spaCy, integrating high-accuracy models for tasks like tokenization, POS tagging, and lemmatization across 68 languages. It supports advanced NLP tasks, including named entity recognition, using Stanza's neural pipelines. Ideal for developers who want the strengths of both spaCy and Stanza, it exposes customizable options within spaCy's pipeline and supports user-defined components. It is compatible with spaCy v3.0 and above.
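A minimal usage sketch following the package's documented pipeline loader (assumes the English Stanza model has been downloaded first):

```python
import stanza
import spacy_stanza

# One-time download of the Stanza English model
stanza.download("en")

# Build a spaCy pipeline backed by Stanza's neural models
nlp = spacy_stanza.load_pipeline("en")

doc = nlp("Barack Obama was born in Hawaii. He was elected president in 2008.")
for token in doc:
    print(token.text, token.lemma_, token.pos_, token.dep_, token.ent_type_)
print(doc.ents)
```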
ngram
This article provides an in-depth look at n-gram language modeling and its implementation in Python and C. It covers core machine-learning steps such as training, evaluation, and hyperparameter tuning, alongside tokenization and next-token prediction in autoregressive models. Using a names dataset from ssa.gov, it offers a practical guide to model training, validation, and generating new names. It also compares the Python and C implementations, with measurements of perplexity and sampling efficiency, making it a good fit for readers interested in the computational workings of language models.
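To give a flavor of the approach, here is a minimal character-level bigram sketch in Python; this is a toy stand-in, not the article's code, and the names, vocabulary, and add-one smoothing are illustrative choices:

```python
import math
from collections import Counter, defaultdict

def train_bigram(names, vocab):
    """Count character transitions, then convert to add-one-smoothed probabilities."""
    counts = defaultdict(Counter)
    for name in names:
        chars = ["<s>"] + list(name) + ["</s>"]
        for a, b in zip(chars, chars[1:]):
            counts[a][b] += 1
    return {a: {b: (c[b] + 1) / (sum(c.values()) + len(vocab)) for b in vocab}
            for a, c in counts.items()}

def perplexity(model, name):
    """exp of the average negative log-likelihood over the name's transitions."""
    chars = ["<s>"] + list(name) + ["</s>"]
    nll = -sum(math.log(model[a][b]) for a, b in zip(chars, chars[1:]))
    return math.exp(nll / (len(chars) - 1))

vocab = list("abcdefghijklmnopqrstuvwxyz") + ["</s>"]
model = train_bigram(["emma", "olivia", "ava"], vocab)
print(perplexity(model, "ava"))  # low perplexity: the name was in training data
```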
tokenizers
Utilize these high-performance Rust-based tokenizers for efficient text processing in research and production. The toolkit supports normalization with alignment tracking and processing steps such as truncation, padding, and special-token insertion, and it offers bindings for Python, Node.js, and Ruby, among other languages. Tokenizers can be customized and trained with minimal code. Comprehensive documentation and quick-start guides are available for a deeper understanding.
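As an example, the library's quick-start pattern trains a BPE tokenizer in a few lines of Python (the training file path here is a placeholder):

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

# Build an empty BPE tokenizer with whitespace pre-tokenization
tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

# Train on a local corpus (placeholder path) with common special tokens
trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["corpus.txt"], trainer=trainer)

encoding = tokenizer.encode("Hello, y'all!")
print(encoding.tokens)   # subword strings
print(encoding.offsets)  # character spans in the original text for each token
```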
llama3-from-scratch
Discover a comprehensive guide to implementing Llama3 from scratch using direct tensor and matrix operations. It explains how to load Meta's released model weights, use tiktoken for tokenization, and work through embedding normalization and self-attention mechanics. It walks through the transformer configuration of 32 layers with multi-head attention, building an understanding of the network's dynamics without heavy reliance on built-in neural modules.
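One representative step is the RMS normalization Llama applies before its attention and feed-forward blocks, which can be written directly on tensors; a minimal sketch (the shapes and epsilon value are illustrative, not the repository's exact code):

```python
import torch

def rms_norm(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Scale each position by the reciprocal root-mean-square of its features,
    # then apply a learned per-dimension gain (no mean subtraction, unlike LayerNorm)
    return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps) * weight

tokens = torch.randn(8, 4096)  # (sequence length, model dim); example shape
gain = torch.ones(4096)        # in practice, loaded from the checkpoint
normed = rms_norm(tokens, gain)
```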
fugashi
Fugashi is a Cython wrapper for MeCab that provides efficient Japanese tokenization and morphological analysis. It installs easily on major platforms, including Linux, macOS, and Windows. While it primarily uses UniDic, fugashi also supports other MeCab dictionaries, offering flexibility for varied text-processing needs. Resources such as interactive demos and guides help users understand Japanese tokenization. For those seeking an alternative that does not require a MeCab installation, SudachiPy is one option. Fugashi is also used in research, and users are encouraged to cite it in academic work.
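Basic tagging follows MeCab conventions; a short sketch (assumes a UniDic dictionary such as the unidic-lite package is installed, and the feature fields shown are UniDic-specific):

```python
import fugashi

tagger = fugashi.Tagger()  # picks up the installed UniDic dictionary
text = "麩菓子は、麩を主材料とした日本の菓子。"

for word in tagger(text):
    # surface form, part of speech, and lemma from UniDic's feature set
    print(word.surface, word.pos, word.feature.lemma, sep="\t")
```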
datablations
Discover strategies for scaling language models in data-constrained settings. This repository includes experiments on data repetition and compute budgets, running up to 900 billion tokens and models with 9 billion parameters. It proposes a scaling law for compute allocation that accounts for the decaying value of repeated tokens and of excess parameters. Methods for mitigating data scarcity, such as augmenting with code and filtering via perplexity and deduplication, are also covered. Over 400 trained models and the accompanying datasets are available, supporting robust language-model development in constrained environments.
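The law's key ingredient is an effective data size that discounts repeated tokens with exponentially diminishing returns; it is roughly of the form

$$D' = U_D + U_D\, R^*_D \left(1 - e^{-R_D / R^*_D}\right)$$

where $U_D$ is the number of unique tokens, $R_D$ the number of repetitions, and $R^*_D$ a decay constant fitted empirically in the repository's experiments.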
vibrato
Vibrato is a fast tokenizer built on the Viterbi algorithm, reimplementing MeCab's lattice-based analysis in Rust. It offers features such as cache-efficient ID mappings and compatibility with custom dictionaries. Vibrato integrates across platforms, including through a Python wrapper, and supports flexible training parameters to suit specific linguistic needs, making it well suited to applications that require rapid tokenization.
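A rough sketch of the Python wrapper, based on the project's examples; the dictionary path, the zstandard decompression step, and the accessor names are assumptions, so check the wrapper's README before relying on them:

```python
import vibrato
import zstandard  # distributed dictionaries are zstd-compressed

# Load a pre-compiled system dictionary (placeholder path)
dctx = zstandard.ZstdDecompressor()
with open("system.dic.zst", "rb") as fp:
    with dctx.stream_reader(fp) as reader:
        tokenizer = vibrato.Vibrato(reader.read())

for token in tokenizer.tokenize("外国人参政権"):
    print(token.surface(), token.feature())
```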
parseltongue
Explore a versatile browser extension for text conversion and real-time tokenization visualization, supporting formats like leetspeak, binary, and base64. Suitable for developers, linguists, and everyday users, the tool streamlines common text transformations. Fully compatible with Firefox and Chrome, it offers a popup UI and context-menu integration. Real-time visualization with colored tokens shows how input text splits into tokens. Easy installation and a public roadmap encourage contributions and further development.
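The conversions themselves are simple; a Python sketch of the three formats named above (illustrative only, not the extension's code, and the leetspeak mapping is an arbitrary choice):

```python
import base64

text = "tokenize"
print(base64.b64encode(text.encode("utf-8")).decode("ascii"))   # base64
print(" ".join(f"{byte:08b}" for byte in text.encode("utf-8")))  # binary
print(text.translate(str.maketrans("aeiost", "431057")))        # naive leetspeak
```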
gpt3-tokenizer
Discover a flexible TypeScript tokenizer for OpenAI's GPT-3 and Codex, usable in both Node.js and the browser. Built on OpenAI's token dictionary, it produces tokenizations matching the OpenAI Playground, with performance gains from JavaScript's Map API. Installation and usage guides make integration straightforward.
sacremoses
Sacremoses provides a versatile text-processing toolkit for Python 3, covering tokenization, detokenization, truecasing, and normalization. Its command-line interface handles large text corpora with configurable language settings, including aggressive dash splitting, XML character escaping, and multiprocessing support. Features such as truecasing model training and pipeable operations make it a practical tool for text preprocessing in both academic and production settings.
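Typical library usage round-trips text through tokenization and detokenization (the language code and sample text are illustrative):

```python
from sacremoses import MosesDetokenizer, MosesTokenizer

mt = MosesTokenizer(lang="en")
md = MosesDetokenizer(lang="en")

tokens = mt.tokenize("This, is a sentence with weird\u00bb symbols\u2026")
print(tokens)
# Detokenization restores readable text from the token list
print(md.detokenize(tokens))
```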