arcada-labs committed · verified
Commit ddfc95d · Parent: 80bd3db

Upload README.md with huggingface_hub

Files changed (1): README.md (+143, −2)
README.md CHANGED

@@ -1,4 +1,23 @@
 ---
 configs:
 - config_name: default
   data_files:
@@ -6,6 +25,128 @@ configs:
   path: metadata.jsonl
 ---

-# appointment_bench

-25-turn dual-appointment benchmark: two patients (Daniel/Danielle Nolan), two doctors (Perry/Barry), phone number swap+revert, 3 false memory traps, slot-taken error recovery, cross-entity state tracking

---
language:
- en
license: mit
pretty_name: Appointment Bench
tags:
- audio
- benchmark
- speech-to-speech
- voice-ai
- multi-turn
- tool-use
- evaluation
- state-tracking
- function-calling
task_categories:
- automatic-speech-recognition
- text-generation
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
    path: metadata.jsonl
---

# Appointment Bench

**25-turn speech-to-speech benchmark** for evaluating voice AI models as a dental office receptionist handling appointment scheduling.

Part of [Audio Arena](https://audioarena.ai), a suite of 6 benchmarks spanning 221 turns across different domains. Built by [Arcada Labs](https://arcada.dev).

[Leaderboard](https://audioarena.ai/leaderboard) | [GitHub](https://github.com/Design-Arena/audio-arena) | [All Benchmarks](#part-of-audio-arena)

## Dataset Description

The model acts as a dental office receptionist scheduling appointments for two patients with confusable names (Daniel and Danielle Nolan) across two doctors (Dr. Perry and Dr. Barry). The conversation includes phone number swaps and reverts, slot-taken error recovery, and cross-entity state tracking, where information from one patient's booking must not leak into the other's.

## What This Benchmark Tests

- **Tool use**: 4 functions — appointment booking, availability lookup, patient record retrieval, schedule management
- **Confusable entity disambiguation**: Two patients with near-identical names (Daniel/Danielle Nolan), two doctors (Dr. Perry/Dr. Barry)
- **Phone number swap and revert**: A correction followed by a revert back to the original number
- **False memory traps**: 4 turns that assert things the model never said or did
- **Slot-taken error recovery**: Handling booking conflicts when a requested slot is already occupied
- **Cross-entity state tracking**: Keeping the two patients' details separate across the full conversation

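
To make the swap-and-revert and cross-entity items concrete, here is a toy sketch of the state the model implicitly has to maintain; the phone numbers below are invented for illustration and do not come from the benchmark script:

```python
# Toy model of the cross-entity state the assistant must keep straight.
# All concrete values here are invented for illustration.
state = {
    "Daniel Nolan": {"phone": None},
    "Danielle Nolan": {"phone": None},
}

# Phone number swap + revert, applied to one patient only
state["Daniel Nolan"]["phone"] = "555-0101"  # initially given
state["Daniel Nolan"]["phone"] = "555-0199"  # "sorry, it's actually ..."
state["Daniel Nolan"]["phone"] = "555-0101"  # "no, use the first one after all"

# Cross-entity check: the other patient's record must be untouched
print(state["Daniel Nolan"]["phone"])    # 555-0101
print(state["Danielle Nolan"]["phone"])  # None
```

A model that conflates the two Nolans, or that treats the revert as a third, new number, fails exactly this kind of bookkeeping.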
## Dataset Structure

```
appointment-bench/
├── audio/                  # TTS-generated audio (1 WAV per turn)
│   ├── turn_000.wav
│   ├── turn_001.wav
│   └── ... (25 files)
├── real_audio/             # Human-recorded audio
│   ├── person1/
│   │   └── turn_000.wav ... turn_024.wav
│   └── person2/
│       └── turn_000.wav ... turn_024.wav
├── benchmark/
│   ├── turns.json          # Turn definitions with golden answers
│   ├── hard_turns.json     # Same as turns.json but input_text=null (audio-only)
│   ├── tool_schemas.json   # Tool/function schemas (4 tools)
│   └── knowledge_base.txt  # Dental office knowledge base
└── metadata.jsonl          # HF dataset viewer metadata
```

### Metadata Fields

| Field | Description |
|-------|-------------|
| `file_name` | Path to the audio file |
| `turn_id` | Turn index (0–24) |
| `speaker` | `tts`, `person1`, or `person2` |
| `input_text` | What the user says (text transcript) |
| `golden_text` | Expected assistant response |
| `required_function_call` | Tool call the model should make (JSON, nullable) |
| `function_call_response` | Scripted tool response (JSON, nullable) |
| `categories` | Evaluation categories for this turn |
| `subcategory` | Specific sub-skill being tested |
| `scoring_dimensions` | Which judge dimensions apply |

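
As a sketch of how these fields hang together, the snippet below parses two hand-written JSONL records that mirror an abridged subset of the schema above (all values are invented) and groups audio files by speaker:

```python
import json

# Two hand-written records mirroring (a subset of) the metadata.jsonl
# schema; all values are invented for illustration.
sample_jsonl = "\n".join([
    '{"file_name": "audio/turn_000.wav", "turn_id": 0, "speaker": "tts",'
    ' "input_text": "Hi, I need an appointment.", "required_function_call": null}',
    '{"file_name": "real_audio/person1/turn_000.wav", "turn_id": 0, "speaker": "person1",'
    ' "input_text": "Hi, I need an appointment.", "required_function_call": null}',
])

records = [json.loads(line) for line in sample_jsonl.splitlines()]

# Group audio files by speaker, e.g. to pair the TTS and human
# recordings of the same turn_id
by_speaker = {}
for rec in records:
    by_speaker.setdefault(rec["speaker"], []).append(rec["file_name"])

print(by_speaker["tts"])      # ['audio/turn_000.wav']
print(by_speaker["person1"])  # ['real_audio/person1/turn_000.wav']
```

The same grouping works on the real `metadata.jsonl` by reading it line by line, since each line is one self-contained JSON record.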
## Audio Format

- **Format**: WAV, 16-bit PCM, mono
- **TTS audio**: Generated via text-to-speech
- **Real audio**: Recorded by multiple human speakers reading the same transcripts

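
A quick way to verify these properties on any of the WAV files is the standard-library `wave` module. The snippet below builds a tiny in-memory file just to demonstrate the check; the 16 kHz sample rate is an assumption for the demo, as the card does not state one:

```python
import io
import wave

def wav_info(data: bytes) -> dict:
    """Read channel count, sample width, and frame rate from WAV bytes."""
    with wave.open(io.BytesIO(data)) as w:
        return {
            "channels": w.getnchannels(),
            "sample_width_bytes": w.getsampwidth(),
            "frame_rate": w.getframerate(),
        }

# Build a tiny in-memory 16-bit PCM mono WAV for demonstration
# (the real files live under audio/ and real_audio/).
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)   # mono
    w.setsampwidth(2)   # 16-bit PCM
    w.setframerate(16000)
    w.writeframes(b"\x00\x00" * 160)  # 10 ms of silence

info = wav_info(buf.getvalue())
print(info)  # {'channels': 1, 'sample_width_bytes': 2, 'frame_rate': 16000}
```

Pointing `wav_info` at a file on disk (`open(path, "rb").read()`) checks a downloaded turn the same way.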
## Usage

### With Audio Arena CLI

```bash
pip install audio-arena  # or: git clone + uv sync

# Run with a text model
uv run audio-arena run appointment_bench --model claude-sonnet-4-5 --service anthropic

# Run with a speech-to-speech model
uv run audio-arena run appointment_bench --model gpt-realtime --service openai-realtime

# Judge the results
uv run audio-arena judge runs/appointment_bench/<run_dir>
```

### With Hugging Face Datasets

```python
from datasets import load_dataset

ds = load_dataset("arcada-labs/appointment-bench")
```

## Evaluation

Models are judged on up to 5 dimensions per turn:

| Dimension | Description |
|-----------|-------------|
| `tool_use_correct` | Correct function called with correct arguments |
| `instruction_following` | User's request was actually completed |
| `kb_grounding` | Claims are supported by the knowledge base or tool results |
| `state_tracking` | Consistency with earlier turns (scored on tagged turns only) |
| `ambiguity_handling` | Correct disambiguation (scored on tagged turns only) |

For speech-to-speech models, a 6th `turn_taking` dimension evaluates audio timing correctness.

See the [full methodology](https://github.com/Design-Arena/audio-arena#methodology) for details on two-phase evaluation, penalty absorption, and category-aware scoring.

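
As a rough illustration of the `tool_use_correct` dimension, here is a minimal exact-match comparison between a turn's `required_function_call` and a model's emitted call. The real harness uses a model-based judge, and the tool name and arguments below are invented; this only shows the baseline idea:

```python
import json

def tool_call_matches(required, emitted):
    """Exact-match baseline for tool_use_correct: same function name and
    same arguments (order-insensitive, via dict comparison)."""
    if required is None or emitted is None:
        return required is None and emitted is None
    req, emi = json.loads(required), json.loads(emitted)
    return req["name"] == emi["name"] and req.get("arguments") == emi.get("arguments")

# Invented example call (not taken from the dataset)
required = ('{"name": "book_appointment", "arguments": '
            '{"patient": "Daniel Nolan", "doctor": "Dr. Perry", "time": "10:00"}}')
ok = ('{"name": "book_appointment", "arguments": '
      '{"doctor": "Dr. Perry", "patient": "Daniel Nolan", "time": "10:00"}}')
bad = required.replace("Daniel Nolan", "Danielle Nolan")  # wrong patient

print(tool_call_matches(required, ok))   # True  (argument order is irrelevant)
print(tool_call_matches(required, bad))  # False (confused the two Nolans)
print(tool_call_matches(None, None))     # True  (no call required, none made)
```

Booking the right slot for the wrong Nolan is precisely the failure mode the confusable-name design is meant to surface.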
## Part of Audio Arena

| Benchmark | Turns | Scenario |
|-----------|-------|----------|
| [Conversation Bench](https://huggingface.co/datasets/arcada-labs/conversation-bench) | 75 | Conference assistant |
| **Appointment Bench** (this dataset) | 25 | Dental office scheduling |
| [Assistant Bench](https://huggingface.co/datasets/arcada-labs/assistant-bench) | 31 | Personal assistant |
| [Event Bench](https://huggingface.co/datasets/arcada-labs/event-bench) | 29 | Event planning |
| [Grocery Bench](https://huggingface.co/datasets/arcada-labs/grocery-bench) | 30 | Grocery ordering |
| [Product Bench](https://huggingface.co/datasets/arcada-labs/product-bench) | 31 | Laptop comparison shopping |

## Citation

```bibtex
@misc{audioarena2026,
  title={Audio Arena: Multi-Turn Speech-to-Speech Evaluation Benchmarks},
  author={Arcada Labs},
  year={2026},
  url={https://audioarena.ai}
}
```