# Model Card for EUPE
Running AI models on smart edge devices can unlock various user experiences, but presents challenges due to limited compute and the need to handle multiple tasks simultaneously. This requires a vision encoder with small size but powerful and versatile representations. We present our method, Efficient Universal Perception Encoder (EUPE), which offers both inference efficiency and universally good representations for diverse downstream tasks. We achieve this by distilling from multiple domain-expert foundation vision encoders. Unlike previous agglomerative methods that directly scale down from multiple teachers to an efficient encoder, we demonstrate the importance of first scaling up to a large proxy teacher and then distilling from this single teacher. Experiments show that EUPE achieves on-par or better performance than individual domain experts of the same size on diverse task domains and also outperforms previous agglomerative encoders.
## Model Details
These are Vision Transformer and ConvNeXt models trained following the method described in the EUPE paper. Six models are provided:
- 3 ViT models: ViT-T16, ViT-S16, ViT-B16
- 3 ConvNeXt models: ConvNeXt-T, ConvNeXt-S, ConvNeXt-B
Each Transformer-based model takes an image as input and returns a class token and patch tokens. These models follow the ViT architecture with a patch size of 16. For a 224x224 image, this results in 1 class token + 196 patch tokens = 197 tokens.
The models can accept larger images provided the image dimensions are multiples of the patch size (16). If this condition is not met, the model crops the image to the nearest smaller multiple of the patch size.
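The cropping and token-count arithmetic above can be sketched as follows (a minimal illustration; the helper name is hypothetical and not part of the EUPE API):

```python
def prepare_size(h: int, w: int, patch: int = 16):
    """Crop image dimensions down to the nearest smaller multiple of the
    patch size, then return the cropped size and the resulting token count
    (1 class token + one patch token per 16x16 patch)."""
    h_c = (h // patch) * patch
    w_c = (w // patch) * patch
    n_tokens = 1 + (h_c // patch) * (w_c // patch)
    return (h_c, w_c), n_tokens

# A 224x224 image yields 1 + 14*14 = 197 tokens, matching the text above.
print(prepare_size(224, 224))  # ((224, 224), 197)
```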
### Model Description
- Developed by: Meta AI
- Model type: Vision Transformer, ConvNeXt
- License: FAIR Research License
### Model Sources
- Repository: https://github.com/facebookresearch/eupe
- Paper: https://arxiv.org/abs/2603.22387
## Uses
The models are vision backbones providing multi-purpose features for downstream tasks, and are especially suitable for multi-task settings under a limited compute budget. They can be used without fine-tuning, with downstream modules ranging from non-parametric operators and simple linear layers to heavier language decoders, to obtain competitive results:
- on image classification, using k-NN classifiers on the class token
- on semantic 3D keypoint correspondences
- on depth estimation and semantic segmentation, using linear layers
- on visual question answering, connecting with language models
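The first use case, k-NN classification on class tokens, can be sketched as follows. This is a minimal NumPy illustration with random arrays standing in for EUPE class tokens; the function name and the cosine-similarity / majority-vote choices are assumptions for the sketch, not the paper's exact protocol:

```python
import numpy as np

def knn_classify(train_feats, train_labels, query_feats, k=5):
    """Classify query class tokens by majority vote over the k most
    cosine-similar training tokens (dot product after L2 normalisation)."""
    train = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    query = query_feats / np.linalg.norm(query_feats, axis=1, keepdims=True)
    sims = query @ train.T                       # (n_query, n_train)
    topk = np.argsort(-sims, axis=1)[:, :k]      # indices of k nearest neighbours
    votes = train_labels[topk]                   # (n_query, k) neighbour labels
    return np.array([np.bincount(v).argmax() for v in votes])

# Toy example: two well-separated feature clusters.
train = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
labels = np.array([0, 0, 1, 1])
queries = np.array([[0.95, 0.05], [0.05, 0.95]])
print(knn_classify(train, labels, queries, k=3))  # [0 1]
```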
## Get Started
Follow the installation instructions to set up the environment: clone the EUPE repo and download the PyTorch model checkpoints locally. The example below demonstrates how to obtain the class token and patch tokens for an input image.
```python
import torch
from PIL import Image
from torchvision.transforms import v2

REPO_DIR = <PATH/TO/A/LOCAL/DIRECTORY/WHERE/THE/EUPE/REPO/WAS/CLONED>

def get_img():
    import requests
    url = "http://images.cocodataset.org/val2017/000000039769.jpg"
    image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
    return image

def make_transform(resize_size: int = 256):
    to_tensor = v2.ToImage()
    resize = v2.Resize((resize_size, resize_size), antialias=True)
    to_float = v2.ToDtype(torch.float32, scale=True)
    normalize = v2.Normalize(
        mean=(0.485, 0.456, 0.406),
        std=(0.229, 0.224, 0.225),
    )
    return v2.Compose([to_tensor, resize, to_float, normalize])

model = torch.hub.load(REPO_DIR, 'eupe_vits16', source='local', weights=<PATH/TO/THE/LOCAL/CHECKPOINT>)

img_size = 256
img = get_img()
transform = make_transform(img_size)
with torch.inference_mode():
    with torch.autocast('cuda', dtype=torch.bfloat16):
        batch_img = transform(img)[None]
        outputs = model.forward_features(batch_img)
clstoken, patchtokens = outputs["x_norm_clstoken"], outputs["x_norm_patchtokens"]
```
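For dense prediction heads (linear segmentation or depth layers), the flat patch tokens are typically reshaped into a spatial grid. A minimal sketch of that step, using a NumPy array in place of the model's output tensor (the helper name is hypothetical):

```python
import numpy as np

def patch_tokens_to_map(patch_tokens, img_size=256, patch=16):
    """Reshape flat patch tokens of shape (N, D) into an (H/16, W/16, D)
    feature map, where N must equal the number of 16x16 patches."""
    grid = img_size // patch
    n, d = patch_tokens.shape
    assert n == grid * grid, "token count must match the patch grid"
    return patch_tokens.reshape(grid, grid, d)

# For a 256x256 input there are 16x16 = 256 patch tokens.
fmap = patch_tokens_to_map(np.zeros((256, 384), dtype=np.float32))
print(fmap.shape)  # (16, 16, 384)
```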
## Results
The reader is referred to the associated paper for details on the evaluation protocols.
### Results for ViT backbones

Metric groups: Image Understanding (IN1k-ZS, IN1k-KNN), Vision Language Modeling (TextVQA, SQA, Realworld, POPE, GQA, MMEp), Dense Prediction (SPair, NYUv2, ADE20k).

| Model | #Params | IN1k-ZS | IN1k-KNN | TextVQA | SQA | Realworld | POPE | GQA | MMEp | SPair | NYUv2↓ | ADE20k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| EUPE-ViT-T | 6M | 50.5 | 66.3 | 42.0 | 69.5 | 50.0 | 82.4 | 61.4 | 1258.0 | 37.2 | 0.571 | 36.7 |
| EUPE-ViT-S | 20M | 69.8 | 78.2 | 44.1 | 69.3 | 51.7 | 84.5 | 65.0 | 1304.9 | 46.5 | 0.455 | 46.6 |
| EUPE-ViT-B | 86M | 79.7 | 84.1 | 50.4 | 69.7 | 55.5 | 85.9 | 67.3 | 1374.5 | 51.3 | 0.391 | 52.4 |
### Results for ConvNeXt backbones

Metric groups: Vision Language Modeling (TextVQA, SQA, Realworld, POPE, GQA, MMEp), Dense Prediction (SPair, NYUv2, ADE20k).

| Model | #Params | TextVQA | SQA | Realworld | POPE | GQA | MMEp | SPair | NYUv2↓ | ADE20k |
|---|---|---|---|---|---|---|---|---|---|---|
| EUPE-ConvNeXt-T | 29M | 43.7 | 68.8 | 47.9 | 83.4 | 63.0 | 1278.1 | 41.3 | 0.430 | 43.5 |
| EUPE-ConvNeXt-S | 50M | 45.0 | 68.9 | 50.5 | 84.0 | 64.7 | 1284.2 | 40.1 | 0.388 | 46.8 |
| EUPE-ConvNeXt-B | 89M | 46.4 | 70.1 | 53.3 | 84.7 | 65.8 | 1348.9 | 37.7 | 0.365 | 48.9 |
## Citation

### BibTeX
```bibtex
@misc{zhu2026eupe,
      title={Efficient Universal Perception Encoder},
      author={Zhu, Chenchen and Suri, Saksham and Jose, Cijo and Oquab, Maxime and Szafraniec, Marc and Wen, Wei and Xiong, Yunyang and Labatut, Patrick and Bojanowski, Piotr and Krishnamoorthi, Raghuraman and Chandra, Vikas},
      year={2026},
      eprint={2603.22387},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2603.22387},
}
```