Introduction to the Fortuna Project
Overview
Fortuna is a cutting-edge library for uncertainty quantification, which is crucial in applications that involve critical decision making. By accurately estimating predictive uncertainty, users can evaluate the reliability of model predictions, determine whether human intervention is needed, or establish whether a model is fit for deployment in real-world scenarios. Fortuna makes it straightforward to run benchmarks and to bring uncertainty estimation into production systems.
Features
Fortuna stands out for its capacity to work with pre-trained models from any framework, offering calibration and conformal prediction methods. Additionally, it provides several Bayesian inference methods compatible with deep learning models developed using Flax. The library is intuitive and highly configurable, making it accessible even for professionals new to uncertainty quantification.
Modes of Usage
Fortuna offers three versatile usage modes tailored to different user needs:
- From Uncertainty Estimates: This mode has minimal compatibility requirements, making it the quickest way to interact with the library. It provides conformal prediction methods for both classification and regression, using uncertainty estimates as input to deliver rigorous prediction sets (a sketch follows this list).
- From Model Outputs: In this mode, users begin with trained model outputs in numpy.ndarray format. It lets users calibrate model outputs, estimate uncertainty, compute metrics, and generate conformal sets, and it offers more control over uncertainty estimates than the previous mode.
- From Flax Models: This mode has the most demanding compatibility requirements, since it needs deep learning models written in Flax, but it unlocks the most functionality: predictive uncertainty is quantified through scalable Bayesian inference procedures, which can significantly improve on standard model training (see the sketch after this list).
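The sketches below illustrate the first and third modes. They follow the usage patterns shown in Fortuna's documentation, but the exact class and method names (AdaptivePredictionConformalClassifier, conformal_set, DataLoader.from_array_data, InputsLoader.from_array_inputs, ProbClassifier, predictive.mean) may differ between Fortuna releases, so treat them as assumptions to be checked against the official API reference rather than as the definitive interface.

```python
import numpy as np

from fortuna.conformal import AdaptivePredictionConformalClassifier

# Toy stand-ins: predicted class probabilities for validation and test examples,
# plus validation labels. In practice these come from any trained model.
val_probs = np.random.dirichlet(np.ones(3), size=100)   # shape (100, 3)
test_probs = np.random.dirichlet(np.ones(3), size=20)   # shape (20, 3)
val_targets = np.random.randint(0, 3, size=100)         # shape (100,)

# Conformal prediction sets with coverage guarantees for the test inputs.
conformal_sets = AdaptivePredictionConformalClassifier().conformal_set(
    val_probs=val_probs, test_probs=test_probs, val_targets=val_targets
)
```

For the Flax mode, a sketch under the same caveats might look as follows:

```python
import flax.linen as nn
import numpy as np

from fortuna.data import DataLoader, InputsLoader
from fortuna.prob_model import ProbClassifier


class MLP(nn.Module):
    """A small Flax classifier used purely for illustration."""
    output_dim: int

    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(64)(x))
        return nn.Dense(self.output_dim)(x)


# Toy arrays standing in for a real training set.
x_train = np.random.randn(256, 10).astype("float32")
y_train = np.random.randint(0, 3, size=256)
train_data_loader = DataLoader.from_array_data((x_train, y_train), batch_size=64)

# Wrap the Flax module in a probabilistic classifier and train it; Fortuna then
# applies a (configurable) scalable posterior-approximation procedure.
prob_model = ProbClassifier(model=MLP(output_dim=3))
status = prob_model.train(train_data_loader=train_data_loader)

# Query the posterior predictive, e.g. mean class probabilities for new inputs.
x_test = np.random.randn(32, 10).astype("float32")
test_means = prob_model.predictive.mean(
    inputs_loader=InputsLoader.from_array_inputs(x_test)
)
```

The intermediate "from model outputs" mode works along the same lines through output-calibration counterparts of these classes that consume numpy.ndarray outputs directly; the exact entry points depend on the installed version.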
Installation
Fortuna requires JAX to be installed in a virtual environment. Users can install Fortuna using pip, or by building the package via Poetry. Additional dependencies can be added for enhanced functionality, such as integrating with Hugging Face models or running on Amazon SageMaker.
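A minimal installation sketch follows, assuming the PyPI package name aws-fortuna and a cloned repository for the Poetry route; the names of optional extras are assumptions and should be checked against the installation documentation.

```bash
# Install JAX first (choose the CPU/GPU/TPU build that matches your hardware),
# then install Fortuna from PyPI.
pip install aws-fortuna

# Alternatively, from a clone of the GitHub repository, build and install with Poetry.
poetry install

# Optional extras (exact names may differ by version) enable integrations such as
# Hugging Face models or Amazon SageMaker, e.g.:
pip install "aws-fortuna[transformers]"
```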
Examples and Resources
The Fortuna GitHub repository contains numerous examples to help users get started. Fortuna also integrates with Amazon SageMaker, making it straightforward to train and deploy models there; a step-by-step guide covers setting up the necessary AWS account and infrastructure.
Additional Materials and Citations
Those interested in learning more about Fortuna can refer to additional resources, including an AWS launch blog post and an academic paper available on arXiv. A citation entry is provided for referencing Fortuna in academic work.
Contribution and Licensing
Fortuna is open to contributions from the community, and prospective contributors can refer to the contribution guidelines on GitHub. The project is licensed under the Apache-2.0 License, a permissive license that allows broad use.
In summary, Fortuna offers a robust suite of tools for managing predictive uncertainty in AI models, accommodating a wide range of platforms and frameworks. Its flexibility makes it a valuable resource for developers aiming to improve the reliability and trustworthiness of their models' predictions.