Introduction to Magentic
Magentic is a powerful tool designed to seamlessly integrate Large Language Models (LLMs) into your Python codebase. By utilizing simple decorators like @prompt and @chatprompt, it allows developers to create functions capable of generating and returning structured outputs from LLMs, enabling complex logic by combining LLM queries with standard Python code.
Key Features
Structured Outputs
Magentic utilizes pydantic models and Python's native types to ensure that the outputs from the LLMs are structured and easy to work with.
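For example, a function can declare a pydantic model as its return type and receive a validated instance back. A minimal sketch of this pattern (the Superhero model and its fields are illustrative):

from magentic import prompt
from pydantic import BaseModel

class Superhero(BaseModel):
    name: str
    age: int
    power: str

@prompt("Create a superhero named {name}.")
def create_superhero(name: str) -> Superhero: ...

hero = create_superhero("Garden Man")  # returns a validated Superhero instance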
Chat Prompting
The @chatprompt feature lets you provide structured examples for few-shot learning, which can guide LLMs in generating more accurate responses.
Function Calling
Magentic enables LLM-powered functions to initiate one or more function calls directly from their prompts, managed through FunctionCall objects.
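For instance, a decorated function can return a FunctionCall wrapping one of the functions passed to the prompt, which you then execute explicitly. A sketch following the library's documented pattern (activate_oven is an illustrative example):

from typing import Literal

from magentic import FunctionCall, prompt

def activate_oven(temperature: int, mode: Literal["broil", "bake", "roast"]) -> str:
    """Turn the oven on with the provided settings."""
    return f"Preheating to {temperature} F with mode {mode}"

@prompt(
    "Prepare the oven so I can make {food}",
    functions=[activate_oven],
)
def configure_oven(food: str) -> FunctionCall[str]: ...

call = configure_oven("cookies!")  # the LLM chooses the function and its arguments
result = call()  # executing the FunctionCall runs activate_oven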
Formatting and Asyncio
Incorporate Python objects into prompts naturally and leverage asyncio for asynchronous operations to enhance performance when querying LLMs.
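As a brief sketch (the function and its arguments are illustrative): arguments are formatted into the prompt template, and defining the function as async makes the query awaitable:

import asyncio
from datetime import date

from magentic import prompt

@prompt("Write a haiku about {city} on {day}.")
async def write_haiku(city: str, day: date) -> str: ...

# The date object is formatted into the prompt template automatically
haiku = asyncio.run(write_haiku("Kyoto", date.today()))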
Streaming
Allows for the continuous flow of data, meaning that structured outputs can be processed as they are being generated rather than waiting for the entire output. This is particularly useful for processing large datasets or when time-sensitive results are needed.
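For example, returning StreamedStr lets you consume the response chunk by chunk as it arrives (a minimal sketch using the library's documented streaming type):

from magentic import prompt, StreamedStr

@prompt("Tell me about {country}.")
def describe_country(country: str) -> StreamedStr: ...

# Chunks are printed as they are generated rather than after the full response
for chunk in describe_country("Brazil"):
    print(chunk, end="")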
Vision
Extract structured data from images using LLMs, opening up capabilities for tasks such as image classification or annotation.
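A sketch following the pattern in Magentic's vision documentation (the image URL and model fields here are placeholders):

from pydantic import BaseModel, Field

from magentic import chatprompt, UserMessage
from magentic.vision import UserImageMessage

IMAGE_URL = "https://example.com/photo.jpg"  # placeholder image URL

class ImageDetails(BaseModel):
    description: str = Field(description="A brief description of the image.")
    name: str = Field(description="A short name for the image.")

@chatprompt(
    UserMessage("Describe the following image in one sentence."),
    UserImageMessage(IMAGE_URL),
)
def describe_image() -> ImageDetails: ...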
Retry Mechanisms
Improve the reliability of LLMs in following complex output schemas through assisted retries.
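For example, with max_retries set, validation errors from the output schema are sent back to the LLM so it can correct its answer. A sketch (the lowercase-name constraint is illustrative):

from typing import Annotated

from magentic import prompt
from pydantic import AfterValidator, BaseModel

def assert_is_lowercase(value: str) -> str:
    assert value.islower(), "Must be lowercase"
    return value

class Superhero(BaseModel):
    name: Annotated[str, AfterValidator(assert_is_lowercase)]

@prompt(
    "Create a Superhero named {name}.",
    max_retries=3,  # validation errors are fed back to the LLM for another attempt
)
def create_superhero(name: str) -> Superhero: ...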
Broad LLM Provider Support
Magentic supports multiple LLM providers, such as OpenAI and Anthropic, allowing developers to configure the backend that best fits their specific needs.
Type Annotations
Comprehensive type annotations let Magentic integrate smoothly with linters and IDEs, enhancing your development workflow.
Installation
Magentic can be easily installed using pip:
pip install magentic
Alternatively, for those who prefer poetry, installation is also straightforward:
poetry add magentic
To use specific LLM providers like OpenAI, you need to configure your environment variables accordingly.
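For example, the OpenAI backend reads the standard OPENAI_API_KEY environment variable (the key value below is a placeholder):

export OPENAI_API_KEY="sk-..."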
Usage Guide
Basic Usage with @prompt
To create a function that enhances text with an LLM, simply use the @prompt decorator:
from magentic import prompt

@prompt('Add more "dude"ness to: {phrase}')
def dudeify(phrase: str) -> str: ...  # no body needed; the LLM generates the return value

dudeify("Hello, how are you?")
In this example, dudeify transforms a standard greeting into a more laid-back, friendly version.
Advanced Usage with @chatprompt
For scenarios involving multiple, structured message interactions, use @chatprompt:
from magentic import chatprompt, AssistantMessage, SystemMessage, UserMessage
from pydantic import BaseModel

class Quote(BaseModel):
    quote: str
    character: str

@chatprompt(
    SystemMessage("You are a movie buff."),
    UserMessage("What is your favorite quote from Harry Potter?"),
    AssistantMessage(
        Quote(
            quote="It does not do to dwell on dreams and forget to live.",
            character="Albus Dumbledore",
        )
    ),
    UserMessage("What is your favorite quote from {movie}?"),
)
def get_movie_quote(movie: str) -> Quote: ...

get_movie_quote("Iron Man")
Here, the conversation-style prompt, seeded with an example assistant response, guides the model to return well-formed Quote objects for other movies.
Function Calls and Chains
LLMs can invoke specific functions to enrich responses. This is facilitated through FunctionCall and @prompt_chain to handle complex requests requiring multiple steps:
from magentic import prompt_chain

def get_current_weather(location: str, unit: str = "fahrenheit") -> dict:
    """Get the current weather in a given location."""
    # A stub for illustration; a real implementation would query a weather API
    return {"location": location, "temperature": "72", "unit": unit, "forecast": ["sunny", "windy"]}

@prompt_chain(
    "What's the weather like in {city}?",
    functions=[get_current_weather],
)
def describe_weather(city: str) -> str: ...

describe_weather("Boston")
In this case, the LLM first calls get_current_weather, and @prompt_chain feeds the result back to the model to craft a human-readable weather summary.
Streaming and Async Functions
Streaming and asynchronous functions can speed up tasks and manage resources efficiently when generating content concurrently. For instance, creating several StreamedStr responses up front means the descriptions are generated simultaneously, drastically reducing wait time compared to sequential processing:
from time import time

from magentic import prompt, StreamedStr

@prompt("Tell me about {country}.")
def describe_country(country: str) -> StreamedStr: ...

countries = ["Australia", "Brazil", "Chile"]

# Each call returns a StreamedStr immediately, so the three responses
# stream in concurrently as they are consumed below
start_time = time()
streamed_strs = [describe_country(country) for country in countries]
for country, streamed_str in zip(countries, streamed_strs):
    description = str(streamed_str)  # blocks until this response is fully generated
    print(f"{time() - start_time:.2f}s : {country} - {len(description)} chars")
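For fully asynchronous workloads, the same queries can be awaited concurrently with asyncio.gather. A sketch, assuming the decorated function is defined as async (describe_country_async is an illustrative name):

import asyncio

from magentic import prompt

@prompt("Tell me about {country}.")
async def describe_country_async(country: str) -> str: ...

async def main() -> None:
    countries = ["Australia", "Brazil", "Chile"]
    # All three LLM queries run concurrently
    descriptions = await asyncio.gather(*(describe_country_async(c) for c in countries))
    for country, description in zip(countries, descriptions):
        print(f"{country}: {len(description)} chars")

asyncio.run(main())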
Backend and LLM Configuration
Magentic is versatile, supporting several backends. You can choose your desired model and configure it through environment variables or API settings for deeper control.
- openai: for general LLM usage.
- anthropic: for broad feature support, offering a comprehensive suite for LLM interactions.
- litellm and mistral: for specialized needs with varying feature sets and backend configurations.
Each backend offers unique advantages, and Magentic provides the flexibility to select the one that best fits your project requirements.
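For example (a sketch; the model name is illustrative), a model can be chosen per function via the model parameter, or globally through environment variables such as MAGENTIC_BACKEND and MAGENTIC_OPENAI_MODEL:

from magentic import OpenaiChatModel, prompt

@prompt(
    "Say hello to {name}.",
    model=OpenaiChatModel("gpt-4o", temperature=0.5),  # overrides the global default for this function
)
def greet(name: str) -> str: ...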
Conclusion
Magentic provides an elegant and powerful solution for integrating complex and dynamic LLM capabilities directly into Python applications. Whether your needs are simple or advanced, Magentic's seamless integration empowers developers to create robust and intelligent applications with minimal overhead.