
LLM arena leaderboard: Ranking the best LLMs

  • Falk Thomassen
  • Jan 12, 2024
  • 2 min read

Updated: May 22

The LLM arena leaderboard ranks large language models by crowdsourced human preference votes, making it one of the most widely followed LLM evaluation tools.


Using a dynamic Elo-style rating system, the leaderboard provides insights into which models lead in multi-task capability, reasoning, and real-world applicability.


Let’s dive in.


Best LLM on the LLM arena leaderboard

Here is how the main frontier models compare on the LLM arena leaderboard.




Last updated: May 2025

Company     Model               Arena Score
Google      Gemini 2.5          1446
OpenAI      o3                  1409
xAI         Grok-3              1399
Anthropic   Claude 3.7 Sonnet   1297
Meta        Llama 4 Maverick    1266

Google's Gemini 2.5 leads the LLM arena leaderboard with an impressive Arena Score of 1446, well ahead of OpenAI's o3 at 1409.


Meanwhile, xAI's Grok-3 performs strongly at 1399, while Anthropic's Claude 3.7 Sonnet follows at 1297. Meta's Llama 4 Maverick, at 1266, rounds out the top five.



What is the LLM arena leaderboard?

The LLM arena leaderboard (also known as Chatbot Arena) is a platform developed by researchers at UC Berkeley under the LMSYS (Large Model Systems Organization) initiative.


It was designed to evaluate large language models (LLMs) through direct, pairwise comparisons in conversational settings, offering a dynamic ranking system that reflects ongoing competition.


The LLM arena operates as follows:

  • Two LLMs respond to the same prompt anonymously

  • Humans choose the better response based on accuracy, coherence, and helpfulness

  • Scores update after each match to reflect performance (a simplified sketch of this update follows below)
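
The update works much like a chess rating. Below is a minimal sketch of an Elo-style update in Python; the K-factor of 32 and the example ratings are illustrative assumptions, not the leaderboard's actual parameters, and the leaderboard has since moved to a related Bradley-Terry-based Arena Score, but the intuition is the same.

def expected_score(rating_a, rating_b):
    # Probability that model A wins under the Elo model.
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def update_elo(rating_a, rating_b, a_won, k=32.0):
    # Return updated (rating_a, rating_b) after one pairwise human vote.
    # k=32 is an illustrative K-factor, not the leaderboard's setting.
    expected_a = expected_score(rating_a, rating_b)
    actual_a = 1.0 if a_won else 0.0
    delta = k * (actual_a - expected_a)
    return rating_a + delta, rating_b - delta

# Example: a voter prefers model A's response.
a, b = 1446.0, 1409.0
a, b = update_elo(a, b, a_won=True)
print(f"Model A: {a:.1f}, Model B: {b:.1f}")  # A gains ~14 points, B loses the same

Because the expected score depends on the rating gap, an upset win against a higher-rated model moves the scores more than a win that was already expected.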


This leaderboard assesses LLMs on a variety of conversational and reasoning tasks, providing a comprehensive view of their capabilities.

Today, the LLM arena leaderboard serves as a key resource for tracking progress in AI development, showcasing how leading models stack up against one another.



Other LLM benchmarks

At BRACAI, we keep track of how the main frontier models perform across multiple benchmarks.


If you have any questions about these benchmarks, or how to get started with AI in your business, feel free to reach out.
