Reasoning on Graphs (RoG)
Reasoning on Graphs (RoG) is an innovative approach that combines large language models (LLMs) with knowledge graphs (KGs) to enhance reasoning capabilities. This project presents a unique planning-retrieval-reasoning framework that aims to produce reasoning results that are not only accurate but also interpretable.
Overview
At its core, the RoG methodology involves a few critical steps:
- Planning: RoG starts by generating relation paths that are grounded in knowledge graphs. These relation paths form the basis for what is referred to as "faithful plans."
- Retrieval: Using these plans, RoG retrieves valid reasoning paths from the KGs. This step ensures the information collected is reliable and serves the reasoning process effectively.
- Reasoning: Finally, the framework applies these reasoning paths to enable LLMs to conduct reasoned analysis, producing results that are both explainable and faithful to the original data.
This three-step process ensures that the conclusions drawn are not only grounded in strong data foundations but are also easy to understand and interpret.
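To make the flow concrete, here is a minimal, self-contained Python sketch of the three stages. Everything in it (the toy dictionary KG, the hard-coded plan, and the set-based answer step) is illustrative and stands in for the LLM planner and Freebase-style KG that RoG actually uses; it is not the project's code.

```python
# Illustrative sketch of the planning-retrieval-reasoning flow.
# The toy KG and hard-coded "plan" stand in for the real LLM planner and KG.

# Toy knowledge graph: {head entity: {relation: [tail entities]}}
KG = {
    "Joe Biden": {"people.person.place_of_birth": ["Scranton"]},
    "Scranton": {"location.location.containedby": ["Pennsylvania"]},
}

def plan(question):
    # Planning: an LLM would generate KG-grounded relation paths here.
    # We return a fixed path purely for illustration.
    return [["people.person.place_of_birth", "location.location.containedby"]]

def retrieve(topic_entity, relation_path):
    # Retrieval: walk the KG along the relation path, collecting valid
    # reasoning paths (entity -> relation -> entity chains).
    frontier = [[topic_entity]]
    for relation in relation_path:
        frontier = [
            path + [relation, tail]
            for path in frontier
            for tail in KG.get(path[-1], {}).get(relation, [])
        ]
    return frontier

def reason(question, reasoning_paths):
    # Reasoning: an LLM would read the paths and produce a faithful answer;
    # here we simply return the final entity of each retrieved path.
    return {path[-1] for path in reasoning_paths}, reasoning_paths

question = "In which state was Joe Biden born?"
for relation_path in plan(question):
    answers, evidence = reason(question, retrieve("Joe Biden", relation_path))
    print(answers, evidence)  # {'Pennsylvania'} plus the supporting path
```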
Key Components and Features
Latest Work
The project encourages exploration of its subsequent developments, such as Graph-constrained Reasoning, a follow-up line of work by the same group.
Requirements and Setup
To start using RoG, users need to install the necessary packages using the command pip install -r requirements.txt. The pre-trained weights and datasets required for running the project are hosted on Hugging Face, a popular platform in the machine learning community, ensuring ease of access and integration.
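As a rough illustration of how the hosted weights might be pulled in, the standard transformers loading pattern applies; the repository id used below is an assumption and should be checked against the project's Hugging Face page.

```python
# Sketch of loading the released checkpoint with the Hugging Face transformers API.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "rmanluo/RoG"  # assumed repo id; verify on the project's HF page
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map needs accelerate
```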
Datasets
RoG provides specific datasets such as RoG-WebQSP and RoG-CWQ. Subgraph extraction from these datasets is essential for the reasoning process.
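A hedged sketch of loading these datasets with the Hugging Face datasets library is shown below; the dataset ids and split names are assumptions to verify against the project page.

```python
# Sketch of loading the RoG datasets; ids and splits are assumptions.
from datasets import load_dataset

webqsp = load_dataset("rmanluo/RoG-webqsp", split="test")
cwq = load_dataset("rmanluo/RoG-cwq", split="test")

# Each example is expected to carry the question plus a pre-extracted KG subgraph
# (as triples) that the retrieval step walks over.
print(webqsp[0].keys())
```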
How to Run Inference
Running inference through RoG involves several steps:
- Step 1: Planning requires generating relation paths, which can be automated using provided scripts.
- Step 2: Reasoning involves generating answers by employing the paths developed in Step 1.
There are options for plug-and-play reasoning that accommodate various LLMs like OpenAI's GPT-3.5 or Google's Flan-T5, emphasizing RoG's versatility.
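Plug-and-play here essentially means the retrieved reasoning paths can be placed into the prompt of any instruction-following LLM. The sketch below uses Flan-T5 through the transformers pipeline; the prompt wording is illustrative, not the project's actual template.

```python
# Plug-and-play reasoning sketch: hand retrieved reasoning paths to an
# off-the-shelf LLM (here Flan-T5). The prompt is illustrative only.
from transformers import pipeline

reader = pipeline("text2text-generation", model="google/flan-t5-large")

question = "In which state was Joe Biden born?"
reasoning_paths = [
    "Joe Biden -> people.person.place_of_birth -> Scranton "
    "-> location.location.containedby -> Pennsylvania",
]

prompt = (
    "Answer the question using the reasoning paths as evidence.\n"
    "Reasoning paths:\n" + "\n".join(reasoning_paths) + "\n"
    f"Question: {question}\nAnswer:"
)
print(reader(prompt, max_new_tokens=32)[0]["generated_text"])
```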
Training
Training the RoG system involves downloading and preparing the necessary datasets. It requires significant computational resources, preferably two A100-80GB GPUs. The training scripts are designed to simplify the process, creating datasets for relation path pairs, joint-training, and interpretable examples.
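One way to picture the dataset-preparation step is mining relation paths that connect a question's topic entity to its answer inside the extracted subgraph and using them as planning supervision. The sketch below does this with networkx on a toy example; the field names (topic_entity, answer, triples) are assumptions about the data format, not the project's actual schema.

```python
# Hedged sketch of building planning supervision: mine relation paths that
# connect the topic entity to the answer inside the extracted subgraph.
import networkx as nx

def relation_path_labels(example):
    g = nx.MultiDiGraph()
    for head, relation, tail in example["triples"]:
        g.add_edge(head, tail, relation=relation)

    paths = []
    for node_path in nx.all_shortest_paths(g, example["topic_entity"], example["answer"]):
        # Convert the entity path into the sequence of relations along it.
        relations = []
        for u, v in zip(node_path, node_path[1:]):
            relations.append(next(iter(g[u][v].values()))["relation"])
        paths.append(relations)
    return paths

example = {
    "topic_entity": "Joe Biden",
    "answer": "Pennsylvania",
    "triples": [
        ("Joe Biden", "people.person.place_of_birth", "Scranton"),
        ("Scranton", "location.location.containedby", "Pennsylvania"),
    ],
}
print(relation_path_labels(example))
# [['people.person.place_of_birth', 'location.location.containedby']]
```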
Results and Interpretability
RoG aims to produce results that are not only correct but also interpretable, allowing users to understand the reasoning path the model followed. Several experimental results and analyses are provided to showcase the performance and application of RoG.
Conclusion
Reasoning on Graphs marks a significant step forward in AI model interpretability and reasoning. By grounding AI decisions in well-structured knowledge graphs, RoG not only improves accuracy but also embraces transparency in decision-making. This project lays an essential framework for future developments in AI reasoning systems, ensuring they remain grounded and comprehensible.
For any inquiries or to cite this project in academic work, consider referring to their official publication:
@inproceedings{luo2024rog,
  title={Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning},
  author={Luo, Linhao and Li, Yuan-Fang and Haffari, Gholamreza and Pan, Shirui},
  booktitle={International Conference on Learning Representations},
  year={2024}
}