GLOMAP: Global Structure-from-Motion Revisited
Overview
GLOMAP is a global structure-from-motion pipeline for image-based reconstruction. It builds on the foundation laid by COLMAP, a well-known tool in the computer vision community, and aims to deliver faster reconstruction while matching or surpassing the quality of its predecessors.
Key Features
- Efficiency: GLOMAP boasts a remarkable improvement in speed, performing reconstructions 10 to 100 times faster than COLMAP. This makes it exceptionally suited for handling large datasets or time-sensitive applications.
- Quality: Despite its increased speed, GLOMAP does not compromise on the quality of the reconstruction, ensuring that results are reliable and of high fidelity.
- Compatibility: The project is seamlessly integrated with COLMAP, utilizing its database as input and outputting in the familiar COLMAP sparse reconstruction format.
Installation and Setup
To get started with GLOMAP, users first need COLMAP's dependencies installed, as GLOMAP builds directly on COLMAP. Here's a simplified guide to installing GLOMAP:
- Install COLMAP Dependencies: This is a prerequisite for building GLOMAP.
- Build GLOMAP: From the GLOMAP source directory, run the following commands in a terminal:

```shell
mkdir build
cd build
cmake .. -GNinja
ninja && ninja install
```

Alternatively, pre-compiled binaries for Windows can be downloaded for ease of installation.
Usage
Running GLOMAP is straightforward once set up. Users execute GLOMAP with a command that maps images to a reconstruction, specifying paths for the database, the images, and the desired output directory. For a detailed guide to the command options, including enhancements for reconstruction, refer to the built-in help via glomap -h or glomap mapper -h.
End-to-End Example
GLOMAP can be used with a pre-existing COLMAP database or directly from image sets. Here is a step-by-step example:
- From Existing Database: If there is already a COLMAP database, users can run:

```shell
glomap mapper \
    --database_path ./data/gerrard-hall/database.db \
    --image_path    ./data/gerrard-hall/images \
    --output_path   ./output/gerrard-hall/sparse
```
- Directly from Images: This involves first creating a database using COLMAP functions and then proceeding with GLOMAP:

```shell
colmap feature_extractor \
    --image_path    ./data/south-building/images \
    --database_path ./data/south-building/database.db
colmap exhaustive_matcher \
    --database_path ./data/south-building/database.db
glomap mapper \
    --database_path ./data/south-building/database.db \
    --image_path    ./data/south-building/images \
    --output_path   ./output/south-building/sparse
```
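The steps above can also be scripted. The sketch below simply assembles the three commands and, by default, prints them instead of running them; the `build_pipeline` and `run_pipeline` helpers and the `dry_run` flag are illustrative, not part of GLOMAP, and the real run assumes the `colmap` and `glomap` binaries are on your PATH:

```python
import subprocess

def build_pipeline(image_path, database_path, output_path):
    """Assemble the COLMAP/GLOMAP commands for a from-images reconstruction."""
    return [
        ["colmap", "feature_extractor",
         "--image_path", image_path,
         "--database_path", database_path],
        ["colmap", "exhaustive_matcher",
         "--database_path", database_path],
        ["glomap", "mapper",
         "--database_path", database_path,
         "--image_path", image_path,
         "--output_path", output_path],
    ]

def run_pipeline(commands, dry_run=True):
    for cmd in commands:
        if dry_run:
            print(" ".join(cmd))  # preview the command instead of executing
        else:
            subprocess.run(cmd, check=True)  # stop on the first failing step

cmds = build_pipeline("./data/south-building/images",
                      "./data/south-building/database.db",
                      "./output/south-building/sparse")
run_pipeline(cmds)  # dry run: just prints the three commands
```

Keeping the command assembly separate from execution makes it easy to log or inspect the exact invocations before committing to a long-running reconstruction.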
Visualization and Further Use
The results from GLOMAP are written in a format compatible with COLMAP, making them easy to visualize using the COLMAP GUI or alternatives like rerun.io. Developers who want to interact with the reconstruction data programmatically can use tools like pycolmap or COLMAP's C++ library interface.
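For quick scripting without extra dependencies, the text variant of the sparse output can also be parsed directly. The sketch below reads cameras.txt records; the `parse_cameras_txt` helper is illustrative, while the line layout (CAMERA_ID MODEL WIDTH HEIGHT PARAMS[]) follows COLMAP's documented text format:

```python
def parse_cameras_txt(text):
    """Parse COLMAP's cameras.txt: CAMERA_ID MODEL WIDTH HEIGHT PARAMS[]."""
    cameras = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip comments and blank lines
            continue
        tokens = line.split()
        cam_id, model = int(tokens[0]), tokens[1]
        width, height = int(tokens[2]), int(tokens[3])
        params = [float(t) for t in tokens[4:]]  # intrinsics, model-dependent
        cameras[cam_id] = {"model": model, "width": width,
                           "height": height, "params": params}
    return cameras

# Minimal synthetic example of a cameras.txt file
sample = """# Camera list with one line of data per camera
1 SIMPLE_RADIAL 3072 2304 2559.81 1536 1152 -0.0204997
"""
cams = parse_cameras_txt(sample)
print(cams[1]["model"])  # SIMPLE_RADIAL
```

For anything beyond quick inspection, pycolmap's reader handles the binary output format as well and is the more robust choice.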
Advanced Tips
- For large-scale datasets, it is advisable to use COLMAP's sequential_matcher or vocab_tree_matcher for efficient image matching.
- Learning-based descriptors and image retrieval can be facilitated with tools like hloc.
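One way to act on this tip is to select the matcher from the dataset's size and structure. The sketch below is only an illustrative heuristic: the 500-image threshold and the `pick_matcher` helper are assumptions for the example, not GLOMAP or COLMAP defaults, and the right cutoff depends on hardware and time budget:

```python
def pick_matcher(num_images, ordered=False):
    """Illustrative heuristic: exhaustive matching compares all image pairs
    and so scales quadratically; for large sets, prefer sequential matching
    (when images form an ordered sequence, e.g. video) or vocab-tree
    matching (for unordered collections)."""
    if num_images <= 500:  # assumed threshold, tune for your setup
        return "exhaustive_matcher"
    return "sequential_matcher" if ordered else "vocab_tree_matcher"

print(pick_matcher(120))                 # small set: exhaustive_matcher
print(pick_matcher(5000, ordered=True))  # video frames: sequential_matcher
print(pick_matcher(5000))                # large unordered: vocab_tree_matcher
```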
Acknowledgements and Contributions
GLOMAP draws inspiration from several existing projects such as COLMAP, PoseLib, and Theia, and acknowledges their influence. The community is encouraged to contribute through GitHub, whether by reporting issues, suggesting features, or submitting pull requests.
For those interested in discussing GLOMAP or seeking support, GitHub Discussions and the issue tracker provide a platform for engagement.
Licensing
GLOMAP is released under a permissive license, allowing for wide use and adaptation. Users are encouraged to review the license for detailed terms and conditions.
GLOMAP presents a significant leap forward in the sphere of global structure-from-motion, providing researchers and practitioners with a tool that combines performance, scalability, and ease of use in a way that addresses the needs of modern computer vision tasks.