OpenAI has rolled out two new AI models, o3 and o4-mini, that can “think with images,” marking a significant step forward in how machines understand pictures. Announced in an OpenAI press release, the models can reason about images the same way they reason about text, cropping, zooming, and rotating photos as part of their internal thought process.
At the heart of this update is the ability to blend visual and verbal reasoning.
“OpenAI o3 and o4-mini represent a significant breakthrough in visual perception by reasoning with images in their chain of thought,” the company said in its press release. Unlike past versions, these models don’t rely on separate vision systems; instead, they natively combine image tools and text tools for richer, more accurate answers.
How does ‘thinking with images’ work?
The models can crop, zoom, rotate, or flip an image as part of their thinking process, just like humans would. They’re not just recognizing what’s in a photo but working with it to draw conclusions.
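To make those operations concrete, here is a minimal sketch in Python using the Pillow imaging library. It only illustrates the kinds of transformations described above; the file names and pixel coordinates are placeholders, and this is not a representation of OpenAI’s internal implementation.

```python
from PIL import Image

# Load a photo (placeholder file name; any local image works)
img = Image.open("chart.jpg")

# Crop to a region of interest: (left, upper, right, lower) pixel box
detail = img.crop((100, 50, 400, 300))

# "Zoom" by scaling the cropped region up 2x
zoomed = detail.resize((detail.width * 2, detail.height * 2))

# Rotate 90 degrees counterclockwise, e.g. to fix a sideways photo
upright = zoomed.rotate(90, expand=True)

upright.save("inspected.jpg")
```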
The company notes that “ChatGPT’s enhanced visual intelligence helps you solve tougher problems by analyzing images more thoroughly, accurately, and reliably than ever before.”
This means that if you upload a photo of a handwritten math problem, a blurry sign, or a complicated chart, the model can not only understand it but also break it down step by step, potentially more accurately than earlier models could.
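For readers who want to try this programmatically, here is a minimal sketch using the official openai Python package. The model identifier and image URL are assumptions for illustration; check OpenAI’s documentation for the exact model names available to your account.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Ask the model to reason over an image of a handwritten math problem
response = client.chat.completions.create(
    model="o3",  # assumed model identifier; confirm against OpenAI's model list
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Solve the handwritten math problem in this photo step by step."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/math-problem.jpg"}},  # placeholder URL
            ],
        }
    ],
)

print(response.choices[0].message.content)
```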
Outperforms previous models in key benchmarks
These new abilities aren’t just impressive in theory; OpenAI says both models outperform their predecessors on top academic and AI benchmarks.
“Our models set new state-of-the-art performance in STEM question-answering (MMMU, MathVista), chart reading and reasoning (CharXiv), perception primitives (VLMs are Blind), and visual search (V*),” the company noted in a statement. “On V*, our visual reasoning approach achieves 95.7% accuracy, largely solving the benchmark.”
But the models aren’t perfect. OpenAI admits they can sometimes overthink, leading to prolonged and unnecessary image manipulations. In other cases, the AI may misinterpret what it sees even after correctly using tools to analyze the image. The company also warned that results can be inconsistent when the same task is attempted multiple times.
Who can use OpenAI o3 and o4-mini?
As of April 16, both o3 and o4-mini are available to ChatGPT Plus, Pro, and Team users; they replace older models like o1 and o3-mini. Enterprise and education users will get access next week, and free users can try o4-mini through a new “Think” feature.