# Introduction to Local LLM Function Calling
The `local-llm-function-calling` project is a tool for controlling and constraining the output of Hugging Face text generation models. It does so by enforcing a JSON schema, ensuring that the generated text conforms to predefined formats and data structures. The project offers functionality similar to OpenAI's function calling feature, with one key distinction: it enforces the schema strictly, whereas OpenAI's API may return output that deviates from it.
## Key Features
The primary features of the `local-llm-function-calling` project include:
- JSON Schema Enforcement: The tool constrains text generation to strictly follow a specified JSON schema, allowing for precise control over the output.
- Prompt Formulation for Function Calls: It facilitates the creation of prompts specifically for function calls, making it easy to extract and format data accurately.
- User-Friendly Interface: Through the `Generator` class, users can interact with the text generation process with ease.
## Installation
Installing the `local-llm-function-calling` library is straightforward. Run the following command in your terminal:

```bash
pip install local-llm-function-calling
```
## How It Works
To use `local-llm-function-calling`, follow these steps:

1. Define Functions and Models: Begin by defining the functions and specifying the model to be used.

   ```python
   functions = [
       {
           "name": "get_current_weather",
           "description": "Get the current weather in a given location",
           "parameters": {
               "type": "object",
               "properties": {
                   "location": {
                       "type": "string",
                       "description": "The city and state, e.g. San Francisco, CA",
                       "maxLength": 20,
                   },
                   "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
               },
               "required": ["location"],
           },
       }
   ]
   ```

2. Initialize the Generator: Using the defined functions and a Hugging Face model, initialize the text generator.

   ```python
   from local_llm_function_calling import Generator

   generator = Generator.hf(functions, "gpt2")
   ```

3. Generate Text: With a suitable prompt, generate the desired text.

   ```python
   function_call = generator.generate("What is the weather like today in Brooklyn?")
   print(function_call)
   ```
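Because generation is constrained token by token, even a small model such as `gpt2` is guaranteed to produce a call that validates against the schema, though the argument values themselves may not be sensible. As a minimal sketch of consuming the result, assuming the returned call exposes the function name and a JSON string of arguments (the exact return shape may vary between versions, so check the one you have installed):

```python
import json

# Assumption: `function_call` behaves like a mapping of the form
# {"name": "...", "parameters": "<JSON string>"}; verify against your version.
arguments = json.loads(function_call["parameters"])
print(function_call["name"], arguments.get("location"))
```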
## Custom Constraints
The project allows users to define their own constraints and prompts beyond the default methods, tailoring text generation to specific needs. For example, a custom constraint can force the output to be a lowercase sentence:
```python
from local_llm_function_calling import Constrainer
from local_llm_function_calling.model.huggingface import HuggingfaceModel


# A constraint inspects the text generated so far and returns two flags:
# whether it satisfies the constraint, and whether it is complete.
def lowercase_sentence_constraint(text: str):
    return [text.islower(), text.endswith(".")]


constrainer = Constrainer(HuggingfaceModel("gpt2"))
generated = constrainer.generate("Prefix.\n", lowercase_sentence_constraint, max_len=10)
```
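Hand-written predicates are not the only option: the library also provides a `JsonSchemaConstraint` that plugs into the same `Constrainer`. A minimal sketch with an illustrative schema (the trimming step assumes `validate()` reports where the valid JSON document ends; consult the project's documentation for the exact validation API):

```python
import json

from local_llm_function_calling import Constrainer, JsonSchemaConstraint
from local_llm_function_calling.model.huggingface import HuggingfaceModel

# Illustrative schema; any JSON schema the constraint supports will do.
schema = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "maxLength": 10},
        "age": {"type": "integer"},
    },
}

constraint = JsonSchemaConstraint(schema)
constrainer = Constrainer(HuggingfaceModel("gpt2"))
raw = constrainer.generate("Describe a person:\n", constraint, max_len=100)

# The raw output may run past the end of the JSON document, so trim it
# using the constraint's validation result before parsing.
validated = constraint.validate(raw)
data = json.loads(raw[: validated.end_index] if validated.end_index else raw)
```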
## Extending Functionality
For those who wish to customize or extend the prompt generation process, the project offers the ability to subclass the `TextPrompter` class. This flexibility allows users to adapt the tool to better fit specific application contexts or requirements.
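The exact interface to implement lives in the library's prompter module, so the sketch below is a rough, hypothetical illustration only; the class structure, method name, and signature here are assumptions to check against the actual source rather than the library's confirmed API:

```python
# Hypothetical sketch: consult local_llm_function_calling's prompter
# module for the real abstract methods before subclassing TextPrompter.
class TersePrompter:
    """Turns a user prompt plus function definitions into model input."""

    def prompt(self, user_prompt: str, functions: list[dict]) -> str:
        # Assumed responsibility: serialize the available functions and
        # the user's request into the text the model is conditioned on.
        names = ", ".join(f["name"] for f in functions)
        return f"Available functions: {names}\nUser: {user_prompt}\nCall:"
```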
With `local-llm-function-calling`, users gain powerful and flexible control over text generation, ensuring that outputs are both structured and reliable, in line with the specified JSON schemas.