Active filters: fp8
RamManavalan/Qwen3-VL-Embedding-8B-FP8 • Feature Extraction • 9B • 121k downloads • 1 like
cerebras/MiniMax-M2.1-REAP-172B-A10B • Text Generation • 173B • 6.79k downloads • 9 likes
richarddavison/DeepSeek-OCR-2-FP8 • Feature Extraction • 3B • 2.64k downloads • 4 likes
Iori2333/Step3-VL-10B-FP8 • Image-Text-to-Text • 10B • 197 downloads • 1 like
ykarout/Z-Image-Turbo-FP8-Full • Text-to-Image • 84 downloads • 1 like
FriendliAI/Meta-Llama-3-8B-Instruct-fp8 • Text Generation • 8B • 23 downloads • 2 likes
RedHatAI/Meta-Llama-3-8B-Instruct-FP8 • Text Generation • 8B • 2.14k downloads • 24 likes
RedHatAI/Mixtral-8x7B-Instruct-v0.1-AutoFP8 • Text Generation • 47B • 11 downloads • 3 likes
anyisalin/Meta-Llama-3-8B-Instruct-FP8 • Text Generation • 8B • 3 downloads
anyisalin/Meta-Llama-3-8B-Instruct-FP8-D • Text Generation • 8B • 3 downloads
anyisalin/lzlv_70b_fp16_hf-FP8-D • Text Generation • 69B • 2 downloads
anyisalin/Meta-Llama-3-70B-Instruct-FP8-D • Text Generation • 71B • 7 downloads
anyisalin/Mixtral-8x7B-Instruct-v0.1-FP8-D • Text Generation • 47B • 3 downloads
pcmoritz/Mixtral-8x7B-v0.1-fp8-act-scale • Text Generation • 47B • 3 downloads
anyisalin/Meta-Llama-3-70B-Instruct-FP8 • Text Generation • 71B • 3 downloads
RedHatAI/Meta-Llama-3-8B-Instruct-FP8-KV • Text Generation • 8B • 9.69k downloads • 8 likes
comaniac/Meta-Llama-3-8B-Instruct-FP8-v1 • Text Generation • 8B • 4 downloads
comaniac/Mixtral-8x22B-Instruct-v0.1-FP8-v1 • Text Generation • 141B • 6 downloads
RedHatAI/Meta-Llama-3-70B-Instruct-FP8 • Text Generation • 71B • 692 downloads • 13 likes
comaniac/Meta-Llama-3-70B-Instruct-FP8-v1 • Text Generation • 71B • 5 downloads
comaniac/Mixtral-8x7B-Instruct-v0.1-FP8-v1 • Text Generation • 47B • 3 downloads
comaniac/Mixtral-8x7B-Instruct-v0.1-FP8-v2 • Text Generation • 47B • 5 downloads
Skywork/Skywork-MoE-Base-FP8 • Text Generation • 146B • 76 downloads • 7 likes
RedHatAI/Qwen2-72B-Instruct-FP8 • Text Generation • 73B • 693 downloads • 15 likes
comaniac/Meta-Llama-3-70B-Instruct-FP8-v2 • Text Generation • 71B • 3 downloads
comaniac/Mixtral-8x7B-Instruct-v0.1-FP8-v3 • Text Generation • 47B • 3 downloads
comaniac/Mixtral-8x22B-Instruct-v0.1-FP8-v2 • Text Generation • 141B • 14 downloads
RedHatAI/Mixtral-8x22B-Instruct-v0.1-AutoFP8 • Text Generation • 141B • 4 downloads • 3 likes
(model name not captured in the page extract) • Text Generation • 8B • 2 downloads
RedHatAI/Qwen2-0.5B-Instruct-FP8 • Text Generation • 0.5B • 283 downloads • 3 likes
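Most of the text-generation checkpoints listed above are FP8-quantized weights intended for inference engines with FP8 support. As a rough, non-authoritative sketch (assuming vLLM is installed, the GPU supports FP8, and the installed vLLM version recognizes the checkpoint's quantization config), loading one of them, for example RedHatAI/Meta-Llama-3-8B-Instruct-FP8, could look like this:

# Minimal sketch: offline generation with an FP8-quantized checkpoint in vLLM.
# Assumptions: vLLM installed, FP8-capable GPU, and this checkpoint's FP8
# quantization format is supported by the installed vLLM version.
from vllm import LLM, SamplingParams

llm = LLM(model="RedHatAI/Meta-Llama-3-8B-Instruct-FP8")  # quantization settings are read from the model config
params = SamplingParams(temperature=0.7, max_tokens=128)

outputs = llm.generate(["Summarize FP8 quantization in one sentence."], params)
print(outputs[0].outputs[0].text)  # generated completion for the single prompt

The same pattern should apply to the other text-generation entries by swapping in their repository IDs, subject to the same hardware and version caveats.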