o1 pro is OpenAI's most expensive model, but is it any good?
Author: Plivo
Uploaded: Mar 25, 2025
Views: 28,785
OpenAI has released a more powerful version of its "reasoning" model, o1-pro, to developers via the API. Previously only accessible through the $200/month ChatGPT Pro plan, o1-pro is now available for direct API use — but at a steep cost: $150 per million input tokens and $600 per million output tokens. That makes it significantly more expensive than both the base o1 model and GPT-4.5.
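To get a rough sense of what those rates mean in practice, here is a minimal back-of-the-envelope sketch (plain arithmetic, no API calls); the token counts in the example are hypothetical and chosen only for illustration:

```python
# Rough cost estimate for a single o1-pro API call at the published rates.
# The token counts used in the example are hypothetical.
INPUT_RATE = 150 / 1_000_000   # $150 per 1M input tokens
OUTPUT_RATE = 600 / 1_000_000  # $600 per 1M output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated dollar cost of one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# A 2,000-token prompt that produces 10,000 billed output/reasoning tokens:
print(f"${estimate_cost(2_000, 10_000):.2f}")  # -> $6.30
```

At these prices, a single long reasoning-heavy response can easily cost several dollars, which is why the per-call economics matter more here than with cheaper models.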
The model is designed to use more compute to "think harder" and deliver more consistent answers on complex problems. Developers can even adjust how much reasoning effort the model applies, which in turn affects both cost and latency.
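As a sketch of what adjusting that effort might look like with the OpenAI Python SDK, assuming o1-pro is exposed through the Responses API and accepts a reasoning-effort setting (the model name, endpoint, and parameter shape here are assumptions to be checked against OpenAI's current documentation):

```python
# Hedged sketch: calling o1-pro via the OpenAI Python SDK's Responses API.
# The endpoint and the reasoning "effort" parameter are assumptions based on
# OpenAI's docs at the time of writing; verify before relying on this.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.responses.create(
    model="o1-pro",
    input="Which of the three buckets does the ball land in? Explain briefly.",
    reasoning={"effort": "high"},  # e.g. "low" / "medium" / "high"; trades cost and latency for depth
)

print(response.output_text)
```

Lowering the effort setting should reduce both the bill and the response time, at the cost of shallower reasoning on hard problems.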
While OpenAI claims o1-pro offers improved performance, early testing from users has yielded mixed results. Some report high costs for relatively simple tasks, and in certain cases the model has failed to deliver accurate outputs despite extensive reasoning effort. For example, one user asked o1-pro which of three buckets a ball would land in during a visual physics test; recreating the test with Claude 3.7 Sonnet made it clear the ball would end up in the leftmost bucket, yet o1-pro guessed incorrectly, asserting it would land in the rightmost bucket instead. Internal benchmarks also suggest only modest improvements over the base model in areas like coding and math.
