Overview of LangChain Visualizer
LangChain Visualizer is a specialized tool for understanding how LangChain applications behave. By adapting Ought's ICE visualizer, it provides a rich, insightful user interface that lets users view and follow LangChain interactions clearly. It is particularly useful for developers and teams building on LangChain, who gain a more transparent view of their application's behavior and data flows.
Key Features
- Prompt Visibility: Every interaction sent to the large language model (LLM) can be viewed in full detail, so users can see the exact prompt text that drives each action.
- Prompt Highlighting: The interface uses color coding to distinguish between static text and dynamic, template-based substitutions within each prompt.
- Execution Flow Analysis: Users can inspect how nested calls unfold and when each one returns up the stack, making it easier to follow the process and pinpoint where things went wrong.
- Cost Monitoring: When using models like OpenAI's text-davinci-003, users can view the cost breakdown for each LLM call and for the session as a whole, making it easier to manage resources.
How to Get Started
Installation
To start using LangChain Visualizer, simply install the package using the command:
pip install langchain-visualizer
For users on Linux, an additional installation of libyaml might be necessary:
apt install -y libyaml-dev
Basic Setup
- In your Python entry point file, import the visualizer module at the very beginning:
import langchain_visualizer
- Create an asynchronous function for the workflow that you wish to visualize.
- Use langchain_visualizer.visualize to apply visualization to this function, as shown in the sketch below.
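Putting these three steps together, a minimal entry point might look like the following sketch. The hello_demo function and its prompt are illustrative placeholders, and the snippet assumes an OpenAI API key is already configured; any LangChain workflow wrapped in an async function follows the same pattern.

```python
import langchain_visualizer  # step 1: import the visualizer at the very beginning

from langchain.llms import OpenAI

llm = OpenAI(temperature=0)


# Step 2: wrap the workflow you want to trace in an async function.
async def hello_demo():
    return llm("Say hello to the visualizer.")


# Step 3: hand the async function to the visualizer, which runs it and
# serves the trace UI in the browser.
langchain_visualizer.visualize(hello_demo)
```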
Example Usage
To replicate the execution seen in the demonstration screenshot, follow these steps:
- Install the required libraries and optional dependencies:
pip install langchain-visualizer google-search-results openai
- Set up your OpenAI and SERP API keys, or alternatively, replay recorded interactions using the following commands:
$ pip install vcr-langchain
$ OPENAI_API_KEY=dummy python tests/agents/test_langchain_getting_started.py
- For users with operational APIs, execute this Python script:

```python
import langchain_visualizer
import asyncio

from langchain.agents import initialize_agent, load_tools
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.7)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)


async def search_agent_demo():
    return agent.run(
        "Who is Olivia Wilde's boyfriend? What is his current age raised to the 0.23 "
        "power?"
    )


langchain_visualizer.visualize(search_agent_demo)
```
Upon execution, a new browser window opens where you can view the agent's operations in real time.
Support for Jupyter Notebooks
The visualizer is also compatible with Jupyter notebooks. To use it within this environment, import the visualization function specifically designed for Jupyter:
from langchain_visualizer.jupyter import visualize
For practical demonstrations, users can refer to the demo notebook available in the test directories.
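As a rough sketch of that workflow, the notebook cell below traces a single LLM call inline. The notebook_demo function and its prompt are illustrative, an OpenAI API key is assumed to be configured, and the Jupyter-specific visualize is assumed to take the same async-function argument as the standard one.

```python
# Notebook cell: the Jupyter-specific visualize renders the trace inline.
from langchain_visualizer.jupyter import visualize
from langchain.llms import OpenAI

llm = OpenAI(temperature=0)


async def notebook_demo():
    return llm("Explain in one sentence why tracing helps debugging.")


visualize(notebook_demo)
```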
Extended Features: Visualizing Embeddings
LangChain Visualizer allows users to visualize document chunking for embeddings. To activate this feature, use the visualize_embeddings function in conjunction with the main chain visualization:
```python
from langchain_visualizer import visualize, visualize_embeddings

async def run_chain():
    ...

visualize_embeddings()
visualize(run_chain)
```
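To make the embedding view more concrete, here is a hedged sketch of what run_chain might contain: it chunks a short text and embeds the chunks into a vector store. The splitter settings, the sample text, and the FAISS/OpenAIEmbeddings choices are illustrative rather than requirements of the visualizer, and the snippet assumes the corresponding optional dependencies (faiss-cpu, openai) are installed along with an OpenAI API key.

```python
from langchain_visualizer import visualize, visualize_embeddings
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS


async def run_chain():
    # Chunk a small document; each chunk's embedding call shows up
    # in the embeddings visualization.
    splitter = CharacterTextSplitter(separator=" ", chunk_size=100, chunk_overlap=0)
    chunks = splitter.split_text(
        "LangChain Visualizer adapts Ought's ICE UI to trace LangChain runs, "
        "including the document chunks that get embedded for retrieval."
    )
    store = FAISS.from_texts(chunks, OpenAIEmbeddings())
    return store.similarity_search("Which UI does the visualizer adapt?")


visualize_embeddings()
visualize(run_chain)
```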
Comparison with LangChain's Built-in Tracer
Some users may wonder why they should choose LangChain Visualizer over the built-in tracer provided by LangChain. Here are several reasons:
- User Interface Preference: The ICE UI features colored highlights for the variable parts of prompts, keeps static visualizations available for cached LLM calls, and offers better inspection capabilities without leaving the trace page.
- Detail-Oriented Execution Visualization: This tool offers detailed insight into when specific tools or techniques, such as PythonREPL, are executed, providing more granular data than high-level chain execution summaries.
LangChain's built-in tracer, on the other hand, enjoys broader support and may include features not yet integrated into LangChain Visualizer. Users needing specific functionalities are encouraged to contribute through pull requests or by reporting issues.
Additional Resources
For those interested in exploring further, another valuable project is VCR LangChain, which allows developers to record LLM interactions for both testing and demonstration purposes.