LiamLian0727 and nielsr (HF Staff) committed
Commit 9248421 · verified · 1 Parent(s): ea2aa99

Improve dataset card: Add paper/project/code links, sample usage, and update metadata (#2)


- Improve dataset card: Add paper/project/code links, sample usage, and update metadata (99a62ff1129243831cf1a9248e831bc287da0ed1)


Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1): README.md (+63 -6)
README.md CHANGED
@@ -1,14 +1,23 @@
  ---
- license: apache-2.0
- task_categories:
- - question-answering
  language:
  - en
  - zh
  size_categories:
  - 10K<n<100K
  ---

  Spatial intelligence spans a rich suite of abilities, including visualising and transforming shapes, mentally rotating objects, judging relational positions and containment, and estimating numerosity.

  However, it remains a critical, unresolved challenge for Multimodal Large Language Models (MLLMs).
@@ -17,13 +26,62 @@ To fill this gap, we propose to **treat Euclidean geometry problem-solving as a
  To enable the model to acquire and apply Euclidean principles from these geometry problems, we employed GRPO to finetune the Qwen2.5VL family and RoboBrain2.0 family, inspiring the models to identify shapes, count, and relate entities, and perform multi-step deductive reasoning using Euclidean principles.

- Our experiments demonstrate that the resulting models achieve substantial zero-shot gains across four spatial reasoning benchmarks (Super-CLEVR, Omni3DBench, VSI-Bench, and MindCube) without any task-specific adaptations. Notably, after training on the Euclid30K, the mean VSI‑Bench accuracy of all evaluated models rose from 34.5% to 40.5%, improving by 5.5 percentage points. Among them, RoboBrain2.0-Euclid‑7B achieves 49.6% accuracy, surpassing the previous state‑of‑the‑art model, Spatial‑MLLM.

  To our knowledge, this is the first systematic study showing that geometry-centric fine-tuning can confer vision-language models with broadly transferable spatial skills.

  ### Citation
  If you find our dataset useful for your research, please cite us:
- ```
  @misc{Euclids_Gift,
  title={Euclid’s Gift: Enhancing Spatial Perception and Reasoning in Vision-Language Models via Geometric Surrogate Tasks},
  author={Shijie Lian and Changti Wu and Laurence Tianruo Yang and Hang Yuan and Bin Yu and Lei Zhang and Kai Chen},
@@ -33,5 +91,4 @@ If you find our dataset useful for your research, please cite us:
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2509.24473}
  }
-
  ```
 
  ---
  language:
  - en
  - zh
+ license: apache-2.0
  size_categories:
  - 10K<n<100K
+ task_categories:
+ - image-text-to-text
+ tags:
+ - geometry
+ - spatial-reasoning
+ - multimodal
+ - vlm
  ---

+ # Euclid30K Dataset
+
+ [Paper](https://huggingface.co/papers/2509.24473) | [Project Page](https://zgca-ai4edu.github.io/Euclids_Gift/) | [Code](https://github.com/LiamLian0727/Euclids_Gift)
+
  Spatial intelligence spans a rich suite of abilities, including visualising and transforming shapes, mentally rotating objects, judging relational positions and containment, and estimating numerosity.

  However, it remains a critical, unresolved challenge for Multimodal Large Language Models (MLLMs).
 
  To enable the model to acquire and apply Euclidean principles from these geometry problems, we employed GRPO to finetune the Qwen2.5VL family and RoboBrain2.0 family, inspiring the models to identify shapes, count, and relate entities, and perform multi-step deductive reasoning using Euclidean principles.

+ Our experiments demonstrate that the resulting models achieve substantial zero-shot gains across four spatial reasoning benchmarks (Super-CLEVR, Omni3DBench, VSI-Bench, and MindCube) without any task-specific adaptations. Notably, after training on the Euclid30K, the mean VSI‑Bench accuracy of all evaluated models rose from 34.5% to 40.5%, improving by 5.5 percentage points. Among them, RoboBrain2.0-Euclid‑7B achieves 49.6% accuracy, surpassing the previous state‑of‑the‑art model, Spatial‑MLLM.

  To our knowledge, this is the first systematic study showing that geometry-centric fine-tuning can confer vision-language models with broadly transferable spatial skills.

+ ## Sample Usage
+
+ Below are instructions and code snippets for setting up the environment, training, and evaluation, adapted from the official [GitHub repository](https://github.com/LiamLian0727/Euclids_Gift).
+
+ ### 1) Environment Setup
+
+ **Training**
+ - Install [EasyR1](https://github.com/hiyouga/EasyR1) following the official documentation.
+ - Install the required Python dependencies:
+ ```bash
+ pip install -r requirements.txt
+ ```
+ - Download the Euclid30K dataset from Hugging Face: https://huggingface.co/datasets/LiamLian0727/Euclid30K
+
+ **Evaluation**
+ - Install [lmms‑eval](https://github.com/EvolvingLMMs-Lab/lmms-eval) following its official documentation. You can either:
+   - Use the [`lmms-eval/`](https://github.com/EvolvingLMMs-Lab/lmms-eval) copy included in the GitHub repository; or
+   - Copy the four task folders provided under `test/lmms_eval/tasks/` from the GitHub repository into your existing lmms‑eval setup.
+ - Download the benchmark datasets [Super‑CLEVR](https://huggingface.co/datasets/MMInstruction/SuperClevr_Val), [Omni3DBench](https://huggingface.co/datasets/dmarsili/Omni3D-Bench), [VSI‑Bench](https://huggingface.co/datasets/nyu-visionx/VSI-Bench), and [MindCube_lmms_eval](https://huggingface.co/datasets/LiamLian0727/MindCube_lmms_eval); then update the dataset paths in each corresponding YAML under `test/lmms_eval/tasks/`.
+
+ ### 2) Training
+
+ Below is an example command for training (e.g., 8 GPUs per node). For multi‑node multi‑GPU training, refer to the example script [`train/dist_train.sh`](https://github.com/LiamLian0727/Euclids_Gift/blob/main/train/dist_train.sh) in the GitHub repository.
+
+ ```bash
+ python3 -m verl.trainer.main \
+     config=examples/config.yaml \
+     data.train_files=/mnt/datasets/Euclid30K/Euclid30K_train.parquet \
+     data.val_files=/mnt/datasets/Euclid30K/Euclid30K_val.parquet \
+     worker.actor.model.model_path=/mnt/models/Qwen2.5-VL-7B-Instruct \
+     trainer.experiment_name=EXPERIMENT_NAME \
+     worker.actor.micro_batch_size_per_device_for_update=1 \
+     worker.actor.micro_batch_size_per_device_for_experience=8 \
+     worker.actor.clip_ratio_low=0.2 \
+     worker.actor.clip_ratio_high=0.28 \
+     worker.reward.reward_function=/mnt/code/Euclids_Gift/train/euclid.py:compute_score \
+     algorithm.online_filtering=True \
+     trainer.total_epochs=10 \
+     trainer.n_gpus_per_node=8 \
+     trainer.nnodes=2 \
+     trainer.save_checkpoint_path=/mnt/models/Qwen2.5-VL-7B-Euclid
+ ```
+
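The `worker.reward.reward_function` argument points GRPO at `train/euclid.py:compute_score` in the repository. As a rough illustration only of what a rule-based reward for verifiable geometry answers can look like — the signature EasyR1 actually expects, the answer tags, and the weights below are all assumptions, not the repository's implementation:

```python
# Illustrative sketch of a rule-based GRPO reward; the real compute_score
# lives in train/euclid.py and may differ in signature and scoring.
import re

def compute_score(predict: str, ground_truth: str) -> float:
    """Toy reward: full credit for a correct boxed answer, a small bonus
    for following a <think>...</think> reasoning format."""
    format_ok = bool(re.search(r"<think>.*?</think>", predict, re.DOTALL))
    m = re.search(r"\\boxed\{([^}]*)\}", predict)
    answer = m.group(1).strip() if m else None
    accuracy = 1.0 if answer == ground_truth.strip() else 0.0
    # Weighted mix of accuracy and format rewards (weights are arbitrary here)
    return 0.9 * accuracy + 0.1 * (1.0 if format_ok else 0.0)

print(compute_score("<think>angle sum is 180</think> \\boxed{60}", "60"))
```

Keeping the reward purely rule-based (string matching on a verifiable final answer) is what makes GRPO training on Euclid30K-style problems cheap: no learned reward model is involved.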
+ ### 3) Evaluation
+
+ Use [`test/eval_qwen.sh`](https://github.com/LiamLian0727/Euclids_Gift/blob/main/test/eval_qwen.sh), [`test/eval_robo.sh`](https://github.com/LiamLian0727/Euclids_Gift/blob/main/test/eval_robo.sh), and [`test/eval_euclid.sh`](https://github.com/LiamLian0727/Euclids_Gift/blob/main/test/eval_euclid.sh) from the GitHub repository to evaluate the Qwen2.5‑VL series, the RoboBrain 2.0 series, and Euclid models trained on Euclid30K, respectively.
+
+ Before running these scripts, set `model_path` in each script to the path of the model you want to evaluate.
+
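The `model_path` line can also be switched with `sed` rather than editing each script by hand. The sketch below rehearses the substitution on a stand-in file, since the real scripts live under `test/` in the repository and their exact contents may differ:

```shell
# Stand-in for one of the eval scripts (the real ones ship with the repo)
printf 'model_path=/mnt/models/placeholder\n' > eval_demo.sh

# Point model_path at the checkpoint to evaluate (the path is an example)
sed -i 's|^model_path=.*|model_path=/mnt/models/Qwen2.5-VL-7B-Euclid|' eval_demo.sh

cat eval_demo.sh
```

Note that `sed -i` without a backup suffix is GNU syntax; on BSD/macOS use `sed -i ''`.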
  ### Citation
  If you find our dataset useful for your research, please cite us:
+ ```bibtex
  @misc{Euclids_Gift,
  title={Euclid’s Gift: Enhancing Spatial Perception and Reasoning in Vision-Language Models via Geometric Surrogate Tasks},
  author={Shijie Lian and Changti Wu and Laurence Tianruo Yang and Hang Yuan and Bin Yu and Lei Zhang and Kai Chen},

  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2509.24473}
  }
  ```