# Taichi NeRFs: A Comprehensive Guide

## Overview

Taichi NeRFs combines the power of PyTorch and Taichi to implement a NeRF (Neural Radiance Field) training pipeline. The implementation is inspired by instant-ngp, which accelerates NeRF training and builds efficient 3D models from 2D images or video. If you are interested in the specifics of the modeling, a detailed article is available.
## Installation

To get started with Taichi NeRFs, you will need to perform a few installations:

- Install PyTorch:

  ```shell
  python -m pip install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu116
  ```

  Replace the `cu116` segment of the URL with the tag matching your CUDA Toolkit version.

- Install the latest nightly build of Taichi:

  ```shell
  pip install -U pip && pip install -i https://pypi.taichi.graphics/simple/ taichi-nightly
  ```

- Install the remaining required packages:

  ```shell
  pip install -r requirements.txt
  ```

- To train on your own videos, COLMAP is also required:

  ```shell
  sudo apt install colmap
  ```

  Alternatively, follow the installation instructions on the COLMAP website.
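To make the CUDA-version substitution above concrete, here is a small hypothetical helper (not part of the repository) that builds the PyTorch wheel index URL from a CUDA Toolkit version string:

```python
def torch_index_url(cuda_version: str) -> str:
    """Build the PyTorch wheel index URL for a CUDA Toolkit version,
    e.g. '11.6' -> 'https://download.pytorch.org/whl/cu116'."""
    tag = "cu" + cuda_version.replace(".", "")
    return f"https://download.pytorch.org/whl/{tag}"

print(torch_index_url("11.6"))  # https://download.pytorch.org/whl/cu116
```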
## Training with Preprocessed Datasets

### Synthetic NeRF

To explore NeRFs on a synthetic dataset:

- Download the Synthetic NeRF dataset.
- Unzip the dataset without changing the folder name.

For example, you can train the Lego scene by running:

```shell
./scripts/train_nsvf_lego.sh
```

On an RTX 3090 GPU under Ubuntu 20.04, this reaches an average PSNR of 35.0 in 208 seconds over 20 epochs.
To optimize further:

- Use a Linux workstation with an RTX 3090 GPU.
- Uncomment the `--half2_opt` flag in the script to enable half2 optimization, which is supported only on specific hardware configurations.
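As a quick reference for the PSNR figure quoted above: for images with pixel values normalized to [0, 1], PSNR is computed from the mean squared error as PSNR = -10 · log10(MSE). A minimal sketch:

```python
import math

def psnr(mse: float) -> float:
    """PSNR in dB for images with pixel values in [0, 1]: -10 * log10(MSE)."""
    return -10.0 * math.log10(mse)

# An average PSNR of 35.0 dB corresponds to an MSE of 10^-3.5 (about 3.16e-4).
print(round(psnr(10 ** -3.5), 1))  # 35.0
```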
### 360_v2 Dataset

To use the 360_v2 dataset:

- Download and unzip the 360_v2 dataset without changing the folder name.

Then execute:

```shell
./scripts/train_360_v2_garden.sh
```

For optimal results on an RTX 3090, ensure that `batch_size` is set appropriately; the default of 8192 may require adjustment based on your hardware constraints.
## Training with Your Own Video

Simply place your video in the `data` directory and modify the script with the necessary parameters, such as `scale` and `video_fps`, for optimal image generation. Then run:

```shell
./scripts/train_from_video.sh -v {your_video_name} -s {scale} -f {video_fps}
```

This preprocesses your video and initiates NeRF training.
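To illustrate what `-s` (scale) and `-f` (video_fps) control, here is an illustrative back-of-the-envelope calculation; the function name and exact behavior are assumptions, not the script's actual code. Roughly, the fps determines how many frames are extracted and the scale factor shrinks each frame's resolution:

```python
def preprocessing_summary(duration_s, native_res, scale, video_fps):
    """Estimate the number of extracted frames and their resolution for a
    given video duration, native (width, height), scale, and fps.
    Illustrative only; the real script's behavior may differ."""
    n_frames = int(duration_s * video_fps)
    out_res = (int(native_res[0] * scale), int(native_res[1] * scale))
    return n_frames, out_res

# A 10 s, 1920x1080 clip at scale 0.5 and 30 fps: 300 frames at 960x540.
print(preprocessing_summary(10, (1920, 1080), 0.5, 30))  # (300, (960, 540))
```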
## Mobile Deployment

Taichi NeRFs supports deploying NeRF rendering to mobile devices via Taichi AOT, enabling real-time interaction at high frame rates on devices such as the iPad Pro (M1) and iPhone 14 Pro Max.
## Text to 3D
Taichi NeRFs also acts as a backend for text-to-3D applications like stable-dreamfusion.
## Frequently Asked Questions (FAQ)

- Q: Is CUDA the only supported Taichi backend?

  A: While CUDA is optimized for efficient operation with PyTorch, it is possible to switch to the Taichi Vulkan backend, especially if you do not require PyTorch interoperability.

- Q: What should I do if I encounter an Out of Memory (OOM) error on my GPU?

  A: Adjust the `batch_size` in `train.py`. By default it is set to 8192 for an RTX 3090. For less powerful GPUs such as an RTX 3060 Ti, reducing it to `batch_size=2048` is advisable.
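A simple heuristic for picking a batch size, assuming memory use scales roughly linearly with batch size (the function and the 24 GiB reference point are assumptions; the FAQ's 8192-on-an-RTX-3090 and 2048-on-an-RTX-3060-Ti pairings anchor it):

```python
def suggest_batch_size(gpu_mem_gib: float, default: int = 8192,
                       ref_mem_gib: float = 24.0) -> int:
    """Halve the default batch size (tuned for a 24 GiB RTX 3090) in powers
    of two until it plausibly fits in gpu_mem_gib. Heuristic only."""
    bs = default
    mem = gpu_mem_gib
    while mem < ref_mem_gib and bs > 1:
        mem *= 2
        bs //= 2
    return bs

print(suggest_batch_size(24))  # 8192 (RTX 3090)
print(suggest_batch_size(8))   # 2048 (e.g. RTX 3060 Ti, 8 GiB)
```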
## Acknowledgements
The project draws considerable inspiration from: