Welcome to LangCorn: Your Gateway to Efficient Language Model Deployment
LangCorn is an innovative API server designed to streamline the deployment of LangChain models and pipelines. By harnessing the power of FastAPI, LangCorn delivers a high-performance and efficient solution for developers working on language processing applications. This platform makes it easy to serve complex models with minimal hassle.
Key Features
LangCorn stands out with its robust feature set aimed at simplifying the deployment and operation of language models:
- Easy Deployment: Seamlessly deploy your LangChain models and pipelines with minimal effort.
- Auth Functionality: Out-of-the-box authentication capabilities ensure your deployments are secure.
- FastAPI Framework: Benefit from the responsiveness and capabilities of FastAPI, known for handling requests efficiently.
- Scalability: LangCorn supports scalable operations, making it suitable for projects of various sizes.
- Custom Pipelines: Flexibility to design and implement custom processing pipelines tailored to specific needs.
- RESTful API: Access well-documented endpoints that adhere to RESTful principles, facilitating integration.
- Asynchronous Processing: Enjoy faster response times thanks to asynchronous processing capabilities.
Installation Process
Getting started with LangCorn is straightforward. To install the LangCorn package, use pip:
pip install langcorn
Quick Setup Guide
Here’s a simple way to set up an example LangChain model:
- Create a Chain in Python: Write a Python script to define your chain using libraries like LangChain and OpenAI.
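For example, a minimal examples/ex1.py might look like the sketch below. This is a hypothetical illustration using the classic LangChain LLMChain API; it assumes an OPENAI_API_KEY in your environment, and your prompt and variable names will differ:
# examples/ex1.py -- illustrative module; LangCorn serves the module-level `chain`
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
# A prompt with one input variable that callers supply per request
prompt = PromptTemplate(
    input_variables=["question"],
    template="Answer the following question concisely:\n{question}",
)
# The object referenced by "examples.ex1:chain"
chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)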
- Run the LangCorn Server: Execute your server to start serving your models:
langcorn server examples.ex1:chain
Alternatively, use the following command to achieve the same result:
python -m langcorn server examples.ex1:chain
- Deploy Multiple Chains: If necessary, deploy several chains simultaneously to broaden your capabilities:
python -m langcorn server examples.ex1:chain examples.ex2:chain
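Once the server is up, each chain is exposed over HTTP. As a rough sketch of a client call (the exact route shape, shown here as /examples.ex1/run, and the JSON field names follow your chain's input variables and may vary by LangCorn version):
import requests
# POST the chain's input variables as JSON; the route shape is an assumption
response = requests.post(
    "http://localhost:8000/examples.ex1/run",
    json={"question": "What does LangCorn do?"},
)
print(response.json())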
FastAPI Application Integration
Integrate your LangCorn deployment with a FastAPI app by importing necessary packages and creating the app:
from fastapi import FastAPI
from langcorn import create_service
app: FastAPI = create_service("examples.ex1:chain")
For handling multiple chains:
from fastapi import FastAPI
from langcorn import create_service
app: FastAPI = create_service("examples.ex2:chain", "examples.ex1:chain")
Run your app with Uvicorn (assuming the snippet above lives in main.py):
uvicorn main:app --host 0.0.0.0 --port 8000
Documentation and Support
LangCorn provides automatically generated FastAPI documentation, making it simple to explore and understand the available endpoints. A live example of this documentation is hosted online, so developers can see it in action.
Authentication and Security
Add an API token for authentication to secure API endpoints easily:
python -m langcorn server examples.ex1:chain examples.ex2:chain --auth_token=api-secret-value
Or specify within the application code:
app: FastAPI = create_service("examples.ex1:chain", auth_token="api-secret-value")
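Clients must then present the token with each request. Here is a sketch using a Bearer authorization header; the exact header name and scheme are an assumption, so check the LangCorn docs for your version:
import requests
# Pass the configured auth token; the header scheme is assumed here
response = requests.post(
    "http://localhost:8000/examples.ex1/run",
    json={"question": "Is this endpoint secured?"},
    headers={"Authorization": "Bearer api-secret-value"},
)
print(response.status_code)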
Advanced Features
LangCorn lets you carry conversation memory across requests and override LLM parameters on a per-request basis, so your models behave according to the requirements of each call. You can also customize run functions to tailor data processing to specific queries and outputs.
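For instance, a request body might carry the chain inputs alongside memory and overrides, roughly like this (the memory and llm_kwargs field names are assumptions; verify the exact shape against your version's documentation):
# Hypothetical request body combining chain inputs, memory, and LLM overrides
payload = {
    "question": "Summarize our conversation so far.",
    "memory": [],  # prior conversation state; assumed to be echoed back in the response
    "llm_kwargs": {"temperature": 0.9, "max_tokens": 256},  # per-request overrides
}
In this scheme, the response would return the updated memory, which you pass back on the next call to continue the conversation.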
Contribution and Licensing
LangCorn is an open-source project released under the MIT License. Contributions are welcome and encouraged. For those interested in contributing, start by forking the repository, making changes, and submitting a pull request. Make sure to review the contribution guidelines before getting involved.
LangCorn provides an accessible and powerful way to deploy language models and pipelines. Whether you are deploying a simple model or managing complex pipelines, LangCorn offers the tools you need to succeed.