Functionary: A Comprehensive Overview
Functionary is an advanced language model developed by MeetKai, engineered to interpret and execute functions or plugins efficiently. It decides whether functions should run in parallel or sequentially, comprehends their outputs, and triggers them only when necessary. Functions are defined using JSON Schema Objects, analogous to OpenAI GPT function calls, which makes the model highly adaptable across a wide range of applications.
Key Features
- Efficient Function Execution: Functionary autonomously decides when functions should be executed, optimizing processes based on necessity and priority.
- Intelligent Output Comprehension: The model not only executes functions but also understands their outputs, adding a layer of intelligent decision-making.
- JSON Schema Definition: Functions are defined using JSON Schema Objects, allowing for complex and customized function calls similar to those in OpenAI GPT.
- Parallel and Serial Execution: Based on the requirements, Functionary can run multiple functions simultaneously or in a sequence, enhancing performance efficiency.
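As a concrete illustration, a tool can be declared with a JSON Schema Object in the same shape OpenAI's function calling uses. The `get_weather` function below is a hypothetical example for illustration, not part of Functionary itself:

```python
import json

# A hypothetical tool definition in the JSON Schema style Functionary expects,
# mirroring the OpenAI function-calling format. The name "get_weather" and its
# parameters are illustrative assumptions.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'Paris'"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
            },
            "required": ["city"],
        },
    },
}

# The schema is plain JSON, so it serializes directly into a request body.
print(json.dumps(get_weather_tool, indent=2))
```

Because the definition is ordinary JSON, the same object can be reused verbatim with the OpenAI client's `tools` parameter.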
Latest Developments
Functionary's development has been dynamic, with several models released to enhance functionality and user experience:
- Functionary-Medium-v3.1: As of August 2024, this model ranks 2nd on the Berkeley Function-Calling Leaderboard, highlighting its strong function-calling performance. It features a 128k context length and uses Meta's original prompt template.
- Functionary-Small-v3.2: Released in August 2024, this variant uses MeetKai's custom prompt template, offering improved performance over the earlier v3.1 model.
- Grammar Sampling: A unique feature of Functionary, grammar sampling ensures that generated function names and parameters are always valid by constraining the LLM's generation to follow the prompt template. However, it is available for v2 and v3.0 models only.
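Conceptually, grammar sampling works like constrained decoding: at each step, candidates that would violate the expected structure are masked out before a token is chosen. The sketch below is a toy illustration of that idea, not MeetKai's actual implementation:

```python
def constrained_pick(logits: dict[str, float], allowed: set[str]) -> str:
    """Pick the highest-scoring candidate, considering only candidates the
    grammar allows at this step. Disallowed candidates are effectively
    masked out, so an illegal function name can never be emitted."""
    masked = {tok: score for tok, score in logits.items() if tok in allowed}
    return max(masked, key=masked.get)

# Toy decoding step: the model slightly prefers a misspelled name, but the
# grammar only permits registered function names, so the output stays valid.
logits = {"get_wether": 2.1, "get_weather": 2.0, "hello": 0.3}
allowed_function_names = {"get_weather", "get_stock_price"}
print(constrained_pick(logits, allowed_function_names))  # get_weather
```

In the real model this masking happens over tokenizer vocabulary items rather than whole names, but the guarantee is the same: invalid continuations receive no probability mass.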
Setup and Execution
Functionary's deployment is tailored for both small and medium models, each with specific requirements:
- Small Model Setup: Designed for streamlined deployment, requiring minimal resources.
- Medium Model Setup: This demands more robust infrastructure; recommended configurations are 4x A6000 or 2x A100 80GB GPUs to support the model's capabilities.
Use Cases and Implementation
Implementing Functionary can be achieved through various methods, allowing for both OpenAI-compatible and raw usage:
- OpenAI-Compatible Usage: Functionary exposes an OpenAI-compatible API, so function calls can be made and executed through the familiar OpenAI client interfaces.
- Raw Usage: For more control and flexibility, developers can integrate Functionary using direct HTTP requests, customizing interactions and outputs.
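For raw usage, the request is an OpenAI-style chat payload POSTed to the server. The sketch below only builds and serializes such a payload; the endpoint path (`/v1/chat/completions`) and model name follow the OpenAI-compatible convention and are assumptions here:

```python
import json

# Hypothetical request body for a Functionary server exposing an
# OpenAI-compatible chat completions endpoint.
payload = {
    "model": "meetkai/functionary-small-v3.2",
    "messages": [{"role": "user", "content": "What is the weather in Hanoi?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",  # illustrative tool, not built into Functionary
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }],
    "tool_choice": "auto",  # let the model decide whether to call a tool
}

body = json.dumps(payload)

# To send for real (requires a running Functionary server):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:8000/v1/chat/completions",
#     data=body.encode(), headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```

The response mirrors the OpenAI format as well, so a `tool_calls` entry in the assistant message indicates which function the model chose to invoke and with what arguments.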
Model Selection
Functionary offers a range of models to suit different needs:
- Functionary-Small-v3.2: Features a 128k context and utilizes MeetKai's prompt template; requires 24GB of VRAM at FP16.
- Functionary-Medium-v3.1: Operates on a 128k context with Meta's original prompt template; needs 160GB of VRAM for optimal performance.
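The VRAM figures above follow from FP16 storing two bytes per parameter, plus headroom for activations and the KV cache. A back-of-the-envelope check, assuming parameter counts of roughly 8B for the small model and 70B for the medium model (an assumption based on their Llama-class base models):

```python
def fp16_weight_gb(n_params: float) -> float:
    """Weights-only memory in GB at FP16 (2 bytes per parameter)."""
    return n_params * 2 / 1e9

# Assumed sizes: Small ~8B params, Medium ~70B params.
small = fp16_weight_gb(8e9)    # 16 GB of weights -> fits the 24 GB budget
medium = fp16_weight_gb(70e9)  # 140 GB of weights -> hence ~160 GB advised
print(small, medium)
```

The gap between weights-only memory and the quoted requirement is the headroom for the KV cache, which grows with context length and batch size.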
Compatibility and Tools
Functionary models are designed with specific tool compatibility in mind, ensuring seamless integration and operation. Version 1 models are compatible with earlier releases of the OpenAI Python library, while Version 2 models target the current OpenAI API conventions.
Innovation and Comparison
Compared to similar projects, Functionary stands out with its support for parallel function calls, multi-turn dialogue processing, and the ability to refine results based on tool execution, making it a robust choice for diverse AI applications.
Functionary continues to innovate, aiming to redefine how AI models interact with complex tools and environments, and establishing itself as a practical foundation for modern, data-driven applications.