Trustworthy AI
Trustworthy AI is an initiative from Huawei Noah's Ark Lab that gathers works and projects promoting transparency and reliability in artificial intelligence. It comprises several components designed to help AI systems discover, understand, and manage causality, making it a notable contribution to the field of causal learning.
gCastle
At the heart of Trustworthy AI is the gCastle toolchain, a software suite for causal structure learning. Its name abbreviates gradient-based Causal structure learning, reflecting the toolbox's emphasis on discovering causal relationships with gradient-based methods. A technical report available on arXiv describes the toolbox's capabilities in more detail.
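The gradient-based idea can be illustrated with a NOTEARS-style formulation: treat the weighted adjacency matrix as a continuous parameter, minimize a reconstruction loss plus a sparsity term, and penalize cycles with a smooth function that vanishes exactly on acyclic graphs. The NumPy sketch below is our own illustration, not gCastle's implementation: the function name and hyperparameters are invented, and it uses a fixed penalty weight and plain gradient descent rather than the full optimization schemes in the toolbox.

```python
import numpy as np

def learn_notears_sketch(X, lam=0.01, rho=10.0, lr=0.01, iters=3000):
    """Fixed-penalty sketch of NOTEARS-style causal structure learning.

    Minimizes 0.5/n * ||X - XW||_F^2 + lam * ||W||_1 + rho * h(W), where
    h(W) = tr((I + W*W/d)^d) - d vanishes exactly when W is acyclic
    (a polynomial stand-in for the original matrix-exponential constraint).
    """
    n, d = X.shape
    W = np.zeros((d, d))
    eye = np.eye(d)
    for _ in range(iters):
        # Gradient of the least-squares reconstruction term.
        grad_ls = -X.T @ (X - X @ W) / n
        # Gradient of the smooth acyclicity penalty h(W).
        grad_h = np.linalg.matrix_power(eye + W * W / d, d - 1).T * 2 * W
        # Plain gradient step with an L1 subgradient for sparsity.
        W -= lr * (grad_ls + rho * grad_h + lam * np.sign(W))
        np.fill_diagonal(W, 0.0)  # disallow self-loops
    return W

# Toy problem: x0 causes x1 with weight 2.
rng = np.random.default_rng(0)
x0 = rng.normal(size=1000)
x1 = 2.0 * x0 + 0.1 * rng.normal(size=1000)
W = learn_notears_sketch(np.column_stack([x0, x1]))
```

On this toy data the forward weight W[0, 1] settles near 2 while the acyclicity penalty suppresses the reverse entry W[1, 0]; production implementations instead use an augmented-Lagrangian schedule and stronger optimizers.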
Competition
The project also organizes competitions on causality to showcase and advance research in this domain. Noah's Ark Lab has hosted competitions at PCIC in 2021 and 2022 and, most recently, at NeurIPS in 2023. These events highlight innovative approaches and provide baseline models to guide participants in their explorations.
Datasets
Trustworthy AI also provides access to a collection of datasets developed by Huawei Noah's Ark Lab, including both real-world data and tools for generating synthetic datasets. These resources support empirical research and benchmarking in causal analysis.
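Synthetic causal datasets are typically built by fixing a directed acyclic graph and sampling from a structural equation model over it. The following linear-Gaussian sketch in plain NumPy illustrates the idea; it is not the repository's own generator, and the function name and defaults are ours.

```python
import numpy as np

def simulate_linear_sem(n_samples, adjacency, weight_range=(0.5, 2.0), seed=0):
    """Sample from a linear-Gaussian SEM over a DAG given in topological order.

    adjacency[i, j] = 1 encodes an edge i -> j; because nodes are assumed
    topologically ordered, edges only run from lower to higher indices.
    """
    rng = np.random.default_rng(seed)
    d = adjacency.shape[0]
    # Random edge weights with magnitude in weight_range and random sign.
    magnitudes = rng.uniform(*weight_range, size=(d, d))
    signs = rng.choice([-1.0, 1.0], size=(d, d))
    W = adjacency * magnitudes * signs
    X = np.zeros((n_samples, d))
    for j in range(d):
        # Parents of node j have smaller indices, so they are already filled in.
        X[:, j] = X @ W[:, j] + rng.normal(size=n_samples)
    return X, W

# Chain DAG x0 -> x1 -> x2.
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])
X, W = simulate_linear_sem(1000, A)
```

Pairing the sampled data with the ground-truth matrix W is what lets such datasets serve as benchmarks: a discovery algorithm's output can be scored directly against the known graph.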
Research
The project is committed to developing and disseminating novel research methods in causality. It currently includes several approaches, such as CausalVAE (a variational autoencoder with a causally structured latent space), GAE (a graph autoencoder framework for causal structure learning), and causal discovery via reinforcement learning. These implementations underline the project's aim of pushing the boundaries of causal AI.
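Score-based searches, including the reinforcement-learning approach, need a way to reward each candidate graph; a standard choice for linear-Gaussian data is the Bayesian Information Criterion. The sketch below is our own illustration of such a scoring function (name and details are ours, not the repository's implementation).

```python
import numpy as np

def bic_score(X, adjacency):
    """BIC of a linear-Gaussian DAG: regress each node on its parents.

    Lower is better. A reinforcement-learning search over graphs can use
    the negative score (plus an acyclicity check) as the reward for each
    candidate adjacency matrix it proposes.
    """
    n, d = X.shape
    score = 0.0
    for j in range(d):
        parents = np.flatnonzero(adjacency[:, j])
        if parents.size:
            beta, *_ = np.linalg.lstsq(X[:, parents], X[:, j], rcond=None)
            resid = X[:, j] - X[:, parents] @ beta
        else:
            resid = X[:, j]
        # n * log(residual variance) + (number of parents) * log(n)
        score += n * np.log(resid.var() + 1e-12) + parents.size * np.log(n)
    return score

# The true structure x0 -> x1 explains the data better than an empty graph.
rng = np.random.default_rng(1)
x0 = rng.normal(size=500)
x1 = 1.5 * x0 + rng.normal(size=500)
X = np.column_stack([x0, x1])
A_true = np.array([[0, 1], [0, 0]])
A_empty = np.zeros((2, 2), dtype=int)
```

Here the true graph scores lower than the empty one because the regression on x0 shrinks x1's residual variance, while the log(n) term penalizes each added edge.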
Trustworthy AI, through its various components and resources, aims to cultivate a deeper understanding of causality in AI systems. It strives to ensure that AI applications are not only powerful but also interpretable and reliable, aligning with the overarching goal of fostering trustworthy artificial intelligence.