Model Card for CHMv2
The Canopy Height Maps v2 (CHMv2) model is a DPT-based decoder that estimates canopy height from satellite imagery, using DINOv3 as its backbone. Building on our original high-resolution canopy height maps released in 2024, CHMv2 delivers substantial improvements in accuracy, detail, and global consistency.
Model Details
The CHMv2 model was developed using the satellite DINOv3 ViT-L as a frozen backbone. It is released together with world-scale canopy height maps generated with it, which can help researchers and governments measure and understand every tree, gap, and canopy edge, enabling smarter biodiversity support and land-management decisions.
Usage With Transformers
Run inference on an image with the following code:
```python
from PIL import Image
import torch

from transformers import CHMv2ForDepthEstimation, CHMv2ImageProcessorFast

processor = CHMv2ImageProcessorFast.from_pretrained("facebook/dinov3-vitl16-chmv2-dpt-head")
model = CHMv2ForDepthEstimation.from_pretrained("facebook/dinov3-vitl16-chmv2-dpt-head")

image = Image.open("image.tif")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

depth = processor.post_process_depth_estimation(
    outputs, target_sizes=[(image.height, image.width)]
)[0]["predicted_depth"]
```
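The returned `predicted_depth` tensor holds one canopy-height value per pixel. As a minimal sketch of downstream use (not from the model card), the snippet below computes a few summary statistics from such a map; a small hand-made tensor stands in for the model output so it runs standalone, and the 5 m canopy threshold is an arbitrary illustrative choice.

```python
import torch

# Dummy stand-in for the per-pixel canopy height map (values in meters)
# that `post_process_depth_estimation` would return for a real image.
height_map = torch.tensor([
    [0.0, 2.5, 12.0],
    [8.0, 0.5, 15.5],
    [3.0, 9.0, 1.0],
])

mean_height = height_map.mean().item()       # average canopy height (m)
max_height = height_map.max().item()         # tallest canopy pixel (m)
canopy_mask = height_map > 5.0               # pixels above an arbitrary 5 m cutoff
canopy_fraction = canopy_mask.float().mean().item()

print(f"mean={mean_height:.2f} m, max={max_height:.2f} m, "
      f"cover (>5 m)={canopy_fraction:.0%}")
```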
Model Description
- Developed by: Meta AI
- Model type: DPT head
- License: DINOv3 License
Model Sources
- Repository: https://github.com/facebookresearch/dinov3
- Paper: https://arxiv.org/abs/2603.06382
Direct Use
The model can be used without fine-tuning to obtain competitive results on various satellite datasets (see the paper linked above).
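Real satellite scenes are usually far larger than a single model input, so a common pattern is to tile the scene, run inference per tile, and stitch the results. The sketch below (an illustration, not part of the released code) pads a scene and cuts it into fixed-size tiles; the 224-pixel tile size is an assumption, though any multiple of the ViT-L/16 patch size of 16 should be usable.

```python
import numpy as np

def tile_scene(scene: np.ndarray, tile: int = 224):
    """Pad a (H, W, C) scene so both dims divide by `tile`, then cut tiles.

    Returns a list of ((y, x), tile_array) pairs plus the padded shape,
    so per-tile predictions can be stitched back at the same offsets.
    """
    h, w = scene.shape[:2]
    pad_h = (-h) % tile  # rows needed to reach the next multiple of `tile`
    pad_w = (-w) % tile
    padded = np.pad(scene, ((0, pad_h), (0, pad_w), (0, 0)), mode="reflect")
    tiles = []
    for y in range(0, padded.shape[0], tile):
        for x in range(0, padded.shape[1], tile):
            tiles.append(((y, x), padded[y:y + tile, x:x + tile]))
    return tiles, padded.shape[:2]

scene = np.zeros((500, 700, 3), dtype=np.uint8)  # dummy RGB scene
tiles, padded_shape = tile_scene(scene)
print(len(tiles), padded_shape)  # 12 tiles covering a 672 x 896 padded scene
```

Each tile can then be passed through the processor and model exactly as in the single-image example above.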