lucaswychan committed on
Commit 0c6327e · verified · 1 Parent(s): e60114b

Upload README.md with huggingface_hub

Files changed (1): README.md +6 -5
README.md CHANGED
@@ -1,13 +1,14 @@
 ---
 pretty_name: t2ranking-hard-neg-reasoning-embedding
-library_name: sentence-transformers
-pipeline_tag: sentence-similarity
 tags:
 - jsonl
 - retrieval
+- similarity
 - hard-negative
+- reasoning-embedding
 task_categories:
 - text-retrieval
+- sentence-similarity
 license: apache-2.0
 language:
 - multilingual
@@ -40,7 +41,7 @@ language:
 
 ## Introduction
 
-This is the dataset used to train the embedding models in the paper [`Do Reasoning Models Enhance Embedding Models?`](https://arxiv.org/abs/2601.21192). We use [`Qwen3-Embedding-0.6B`](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) to mine 3 hard negatives per query, and employ the positive-aware hard negative mining technique introduced in [`NV-Retriever`](https://arxiv.org/abs/2407.15831) with 95% margin to the positive score.
+This is the dataset used to train the embedding models in the paper [`Do Reasoning Models Enhance Embedding Models?`](https://arxiv.org/abs/2601.21192). We use [`Qwen3-Embedding-0.6B`](https://huggingface.co/Qwen/Qwen3-Embedding-0.6B) to mine three hard negatives per query, and employ the positive-aware hard-negative mining technique introduced in [`NV-Retriever`](https://huggingface.co/nvidia/NV-Retriever-v1), with a 95% margin relative to the positive score.
 
 ## Abstract
 
@@ -48,13 +49,13 @@ State-of-the-art embedding models are increasingly derived from decoder-only Lar
 
 ## Dataset Structure
 
-The dataset is structured according to the [`GritLM`](https://github.com/lucaswychan/gritlm-re) repository's format: `{"query": List[str], "pos": List[str], "neg": List[str]}`.
+The dataset is structured according to the [`GritLM`](https://github.com/lucaswychan/gritlm-re) repository's format: `{"query": List[str], "pos": List[str], "neg": List[str]}`. The script used to mine the hard negatives is available [here](https://github.com/HKUST-KnowComp/Reasoning-Embedding/blob/main/datasets/mine_hard_neg.py).
 
 * **`query`**: This is a list containing two strings.
   * `query[0]` holds the instruction. A complete list of instructions can be found [here](https://github.com/HKUST-KnowComp/Reasoning-Embedding/blob/main/evaluation/task_prompts.json).
   * `query[1]` contains the actual query text.
 * **`pos`**: A list with a single string, representing the positive anchor for the query. You can add more anchors to the list.
-* **`neg`**: A list containing exactly three strings, which are the mined hard negatives associated with the query.
+* **`neg`**: A list containing one to three strings, which are the mined hard negatives associated with the query.
 
 For example,
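The record layout described in the updated README can be sketched as a single JSONL line. This is a minimal illustration only: the instruction, query, and passage texts below are invented, and real instructions come from the linked `task_prompts.json`.

```python
import json

# Hypothetical record in the GritLM-style format: query = [instruction, query text],
# pos = positive anchor(s), neg = one to three mined hard negatives.
line = json.dumps({
    "query": [
        "Given a web search query, retrieve relevant passages that answer the query",
        "How do transformers compute attention?",
    ],
    "pos": ["Attention weighs token interactions via query-key dot products."],
    "neg": [
        "Transformers are electrical devices that change voltage levels.",
        "Attention in psychology refers to selective concentration.",
    ],
})

record = json.loads(line)
instruction, query_text = record["query"]  # query[0] is the instruction, query[1] the query
assert len(record["pos"]) >= 1             # at least one positive anchor
assert 1 <= len(record["neg"]) <= 3        # one to three hard negatives per query
```

Each line of the dataset's JSONL files is one such object; a trainer following the GritLM format reads them line by line with `json.loads`.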