Overall ranking

33 benchmarks available

Ranking — text and code AI models

| Rank | Model | Avg. score | Tests | Avg. time |
| --- | --- | --- | --- | --- |
| #1 | productivia matania-latest | 9.0/10 | 58 | 5.9 s |
| #2 | anthropic claude-opus-4-7 | 8.8/10 | 64 | 14.0 s |
| #3 | openai gpt-5.5-pro | 8.8/10 | 63 | 149.1 s |
| #4 | openai gpt-5.5 | 8.7/10 | 63 | 19.0 s |
| #5 | google gemini-flash-latest | 8.6/10 | 69 | 12.8 s |
| #6 | anthropic claude-sonnet-4-6 | 8.5/10 | 74 | 17.5 s |
| #7 | anthropic claude-opus-4-6 | 8.4/10 | 64 | 22.3 s |
| #8 | anthropic claude-haiku-4-5-20251001 | 8.1/10 | 64 | 7.2 s |
| #9 | google gemini-flash-lite-latest | 8.1/10 | 69 | 3.4 s |
| #10 | xai grok-4-1-fast-reasoning | 8.0/10 | 64 | 27.2 s |
| #11 | openai gpt-5.4-nano | 7.8/10 | 66 | 13.9 s |
| #12 | mistral mistral-small-latest | 7.6/10 | 67 | 5.0 s |
| #13 | xai grok-4-1-fast-non-reasoning | 7.5/10 | 64 | 8.5 s |
| #14 | mistral mistral-large-latest | 7.5/10 | 68 | 12.8 s |
| #15 | kimi moonshot-v1-128k | 7.3/10 | 38 | 7.4 s |
| #16 | openai gpt-4o-mini | 7.2/10 | 69 | 10.0 s |
| #17 | openai gpt-5.4-pro | 7.2/10 | 24 | 278.7 s |
| #18 | openai gpt-5.4-mini | 7.2/10 | 26 | 11.3 s |
| #19 | mistral mistral-medium-latest | 6.9/10 | 26 | 15.2 s |
| #20 | cohere command-r-08-2024 | 6.4/10 | 43 | 22.4 s |
| #21 | openai gpt-5.4 | 6.3/10 | 26 | 26.0 s |
| #22 | mistral mistral-tiny-latest | 5.9/10 | 43 | 3.4 s |

Ranking — image AI models

| Rank | Model | Avg. score | Tests | Avg. time | Avg. cost |
| --- | --- | --- | --- | --- | --- |
| #1 | segmind ideogram-3 | 8.2/10 | 175 | 14.0 s | $0.04 |
| #2 | openai chatgpt-image-latest | 7.5/10 | 175 | 46.8 s | $0.21 |
| #3 | google imagen-4.0-ultra-generate-001 | 7.4/10 | 175 | 13.1 s | $0.08 |
| #4 | google gemini-3-pro-image-preview | 7.4/10 | 175 | 26.4 s | < $0.01 |
| #5 | google imagen-4.0-generate-001 | 7.4/10 | 175 | 9.2 s | $0.04 |
| #6 | google gemini-2.5-flash-image | 7.3/10 | 167 | 7.7 s | < $0.01 |
| #7 | xai grok-imagine-image-pro | 7.3/10 | 175 | 16.6 s | $0.07 |
| #8 | xai grok-imagine-image | 7.1/10 | 183 | 7.7 s | $0.02 |
| #9 | segmind seedream-4.5 | 7.1/10 | 175 | 19.2 s | $0.04 |
| #10 | segmind seedream-v5-lite | 6.6/10 | 173 | 39.6 s | $0.04 |
| #11 | google imagen-4.0-fast-generate-001 | 6.2/10 | 58 | 4.6 s | $0.02 |