Ax: Building LLM-Powered Agents with TypeScript
Ax is a TypeScript framework for building agents that leverage the power of Large Language Models (LLMs). It aims to streamline the process of integrating these models into production applications. Here’s a closer look at what Ax offers and why it’s useful for developers who need efficiency and scalability in their AI projects.
Focus on Agents
Ax emphasizes the development of intelligent agents—programs that can perform specific tasks autonomously. With Ax, users can rapidly create powerful workflows that are both production-ready and seamlessly integrated with LLMs.
Key Features of Ax
- Versatile Support: Ax works with a wide range of LLMs and vector databases, ensuring flexibility and adaptability in various applications.
- Automated Prompt Generation: Developers can enjoy automated generation of prompts through simple signatures, which makes building and operating agents straightforward.
- Ability to Call Other Agents: Ax allows agents to communicate and cooperate with one another, enhancing their functionality.
- Document Conversion and RAG: The framework supports converting documents of any format to text and includes features like Retrieval-Augmented Generation (RAG), smart chunking, and embedding for querying purposes.
- Advanced Validation and Optimization: Ax validates outputs while they stream, and supports multi-modal DSPy-style programs with automatic prompt tuning.
- OpenTelemetry and Observability: It offers comprehensive tracing and observability using OpenTelemetry, including support for the gen_ai namespace.
- Zero Dependencies: Ax is lightweight and doesn’t rely on external dependencies, making installations simpler and applications leaner.
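The smart-chunking step mentioned in the RAG bullet above can be sketched in a few lines. This is illustrative plain TypeScript with made-up names, not Ax's actual chunking API:

```typescript
// Minimal sketch of fixed-size chunking with overlap, the kind of
// pre-processing a RAG pipeline does before embedding documents.
// Illustrative only; Ax's own chunker is smarter about boundaries.
function chunkText(text: string, chunkSize = 200, overlap = 50): string[] {
  if (overlap >= chunkSize) throw new Error('overlap must be smaller than chunkSize');
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += chunkSize - overlap) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break; // last chunk reached the end
  }
  return chunks;
}
```

The overlap ensures a sentence split across a chunk boundary still appears whole in at least one chunk, which improves retrieval recall.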
Types and Prompts in Ax
Ax introduces the concept of prompt signatures—a structured way to generate safe and efficient prompts. These consist of descriptive input fields and predicted output fields. Developers can define multiple data types for accuracy, such as strings, numbers, booleans, dates, and even JSON.
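To make this concrete, here is an example signature string together with a hypothetical helper that splits it into typed fields. The helper is for illustration only; Ax ships its own, much richer signature parser (descriptions, optional fields, arrays, and more):

```typescript
// An Ax-style prompt signature: typed input fields -> typed output fields.
const signature = 'question:string -> answer:string, confidence:number';

// Hypothetical helper that decomposes a signature into [name, type] pairs.
function parseSignature(sig: string): { inputs: [string, string][]; outputs: [string, string][] } {
  const [lhs, rhs] = sig.split('->').map((s) => s.trim());
  const parseSide = (side: string): [string, string][] =>
    side.split(',').map((field) => {
      const [name, type = 'string'] = field.trim().split(':');
      return [name.trim(), type.trim()];
    });
  return { inputs: parseSide(lhs), outputs: parseSide(rhs) };
}
```

From a signature like this, the framework can generate the full prompt, parse the model's response, and check that each output field has the declared type.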
Supported LLMs and Vector Databases
Ax supports numerous LLM providers, including OpenAI, Azure OpenAI, Anthropic, Google Gemini, and Cohere. For vector databases, Ax simplifies interactions with popular services like Weaviate and Pinecone, and it provides its own in-memory vector database for quick development and testing needs.
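The idea behind an in-memory vector database can be sketched in plain TypeScript: store vectors alongside ids and answer queries by cosine similarity. The class and method names below are made up for illustration and are not Ax's actual API:

```typescript
// Toy in-memory vector store: upsert vectors by id, query by cosine
// similarity. Illustrative only; Ax's built-in DB has its own interface.
type Entry = { id: string; vector: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

class MemoryVectorStore {
  private entries: Entry[] = [];

  upsert(id: string, vector: number[]): void {
    this.entries = this.entries.filter((e) => e.id !== id); // replace on re-insert
    this.entries.push({ id, vector });
  }

  query(vector: number[], topK = 1): { id: string; score: number }[] {
    return this.entries
      .map((e) => ({ id: e.id, score: cosine(e.vector, vector) }))
      .sort((a, b) => b.score - a.score)
      .slice(0, topK);
  }
}
```

A store like this is enough for local development; swapping in Weaviate or Pinecone later changes the backend, not the query pattern.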
Practical Examples
Ax comes with practical examples and built-in functions to help developers hit the ground running. These include:
- Summarizing Text: Using a chain-of-thought approach to condense text into a concise summary.
- Agent Development: Building agents that handle complex tasks by working with other agents.
- Docker and Embeddings: Running agent-issued commands securely in a sandboxed Docker environment, and integrating embeddings seamlessly into applications.
- Streaming and Validation: Improving performance by validating output fields while streaming, reducing latency and cost.
- Fast Routing: Efficiently routing requests without the need for extensive LLM calls, through smart embedding queries.
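The validate-while-streaming idea from the list above can be sketched without any LLM at all: check the output as each piece of the stream arrives, and abort on failure instead of paying for the rest of the generation. The names below are illustrative, not Ax's streaming API:

```typescript
// Sketch of validate-while-streaming: accumulate streamed chunks and
// run a validator after each one, stopping early when validation fails.
// Illustrative only; Ax wires validation into its own streaming pipeline.
function* fakeStream(chunks: string[]): Generator<string> {
  for (const c of chunks) yield c;
}

function streamWithValidation(
  stream: Generator<string>,
  validate: (soFar: string) => boolean,
): { text: string; aborted: boolean } {
  let text = '';
  for (const chunk of stream) {
    text += chunk;
    if (!validate(text)) return { text, aborted: true }; // stop consuming tokens
  }
  return { text, aborted: false };
}
```

Aborting mid-stream is where the latency and cost savings come from: tokens that would have been generated after the failure are never requested.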
Integration and Tuning
Ax integrates with the Vercel AI SDK, so Ax programs can be used within Vercel-based applications. Developers can also tune prompts with optimizers like AxBootstrapFewShot, which bootstraps few-shot demonstrations from a training set to improve prompt quality, an approach that is especially useful with question-answering datasets like HotPotQA.
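Optimizers of this kind score candidate prompts with a user-supplied metric averaged over a dev set. Here is a minimal sketch of such a metric in plain TypeScript; the function shape is illustrative, not Ax's exact callback signature:

```typescript
// Hypothetical exact-match metric of the kind a few-shot optimizer
// consumes; the argument shapes are made up for illustration.
type QAExample = { question: string; answer: string };

function exactMatchMetric(prediction: { answer: string }, example: QAExample): number {
  const norm = (s: string) => s.trim().toLowerCase();
  return norm(prediction.answer) === norm(example.answer) ? 1 : 0;
}

// Averaging the metric over a dev set yields the score the optimizer
// tries to maximize while selecting few-shot demonstrations.
function score(preds: { answer: string }[], examples: QAExample[]): number {
  const total = examples.reduce((acc, ex, i) => acc + exactMatchMetric(preds[i], ex), 0);
  return total / examples.length;
}
```

Because the metric is just a function, it can be swapped for F1, substring match, or an LLM-judged score without changing the tuning loop.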
The Goal of Ax
Ax aims to simplify the integration of LLMs into applications, packaging all necessary tools for model management, prompt generation, error correction, and more, into a single framework. The ultimate goal is to empower developers to harness the full potential of LLMs with minimal overhead.
By using Ax, developers can boost their productivity, reduce complexity, and create intelligent, responsive agents powered by the latest advancements in large language models.