MMMU benchmark: Testing multimodal AI for expert-level reasoning
- BRACAI
- Dec 29, 2024
- 2 min read
Updated: Jul 31
The MMMU benchmark is an important evaluation tool for multimodal LLMs.
It assesses a model's ability to handle complex tasks that combine text and images, helping identify which models excel at expert-level reasoning.
Let’s dive in.
Best LLM for the MMMU benchmark
Comparing the main frontier models on the MMMU benchmark.

Last updated: July 2025
Google’s Gemini 2.5 Pro leads the MMMU leaderboard with an impressive score of 84.0%, narrowly surpassing OpenAI’s o3 at 82.9%.
Meta’s Llama 4 follows with a strong 78.0%, while Anthropic’s Claude Sonnet 4 rounds out the top four at 74.4%.
What is the MMMU benchmark?
MMMU stands for Massive Multi-discipline Multimodal Understanding and Reasoning.
It was introduced by Yue et al. (2024) to evaluate multimodal models on expert-level tasks that integrate text and images.
The benchmark includes 11.5K college-level questions sourced from exams, quizzes, and textbooks, covering six disciplines:
Art & design
Business
Science
Health & medicine
Humanities & social science
Tech & engineering
MMMU challenges models with tasks that require solving problems using diagrams, charts, tables, and other complex formats. It is designed to test advanced reasoning and expert-level knowledge across 30 subjects and 183 subfields.
Unlike simpler benchmarks, MMMU focuses on real-world challenges that demand deep subject understanding and deliberate reasoning.
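If you want to look at the questions yourself, the benchmark is distributed as a public dataset. The snippet below is a minimal sketch of loading a few MMMU questions with the Hugging Face datasets library, assuming the dataset id "MMMU/MMMU" with one configuration per subject and the field names used in the public release; check the dataset card to confirm the exact ids and fields before relying on them.

```python
# Minimal sketch: browse MMMU validation questions for one subject.
# Assumes the dataset id "MMMU/MMMU" and field names from the public release.
from datasets import load_dataset

# Each subject (e.g. "Art", "Accounting") is a separate configuration.
art = load_dataset("MMMU/MMMU", "Art", split="validation")

sample = art[0]
print(sample["question"])  # question text, often referring to the attached image(s)
print(sample["options"])   # multiple-choice options
print(sample["answer"])    # gold answer letter, e.g. "B"
image = sample["image_1"]  # image the question reasons over (diagram, chart, artwork, ...)
```

A typical evaluation loop would render the question, options, and image(s) into a prompt for the multimodal model under test and compare its predicted letter against the gold answer to compute accuracy.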
Other LLM benchmarks
At BRACAI, we keep track of how the main frontier models perform across multiple benchmarks.
If you have any questions about these benchmarks, or how to get started with AI in your business, feel free to reach out.