Can someone update the Mercor article with information about the company's AI benchmark report? The field of AI benchmarking is attracting interest as people try to figure out how to measure these tools' effectiveness. Pinging STEMinfo, the article creator, and Thewtor, who has edited the article extensively.
This would go at the end of the history section:
In January 2026, the company released its first APEX-Agents AI benchmarking report, which evaluated how successfully leading AI models performed business tasks.
[1]
This would go at the end of the Business section:
The company also produces an AI research benchmark called APEX-Agents, which studies how effectively different AI models perform tasks in business areas including consulting, investment banking, and law.
[2][3]
Thank you. Goldenhour23 (talk) 22:06, 13 March 2026 (UTC)