Xiaomi MiMo
From Wikipedia, the free encyclopedia
| Xiaomi MiMo | |
|---|---|
| Developer | Xiaomi |
| Initial release | April 30, 2025 |
| Stable release | MiMo V2.5 Pro (April 22, 2026); MiMo V2.5 (April 22, 2026) |
| Platform | Cloud computing platforms |
| Available in | English and other languages |
| Type | Large language model |
| License | Various; see § List of models |
| Website | mimo |
Xiaomi MiMo is a family of large language models (LLMs) developed by Xiaomi.[1][2] It was initially released in April 2025 with the MiMo-7B model.[3] MiMo is currently available to developers through an API service. It serves as the key AI model in Xiaomi's "Human x Car x Home" ecosystem.[4]
Xiaomi developed MiMo as a reasoning-focused language model. Its development team was led by Luo Fuli, who had previously worked at DeepSeek before joining Xiaomi in late 2025.[5][6] The model was trained using multi-token prediction and reinforcement learning, with a particular emphasis on mathematical reasoning and code generation tasks.[7] In March 2026, Xiaomi CEO Lei Jun announced that the company planned to invest at least US$8.7 billion in artificial intelligence over the following three years.[8]
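Multi-token prediction trains the model to predict several future tokens at each position rather than only the next one. The sketch below is a minimal, hypothetical illustration of how such training targets can be constructed; the function name and the number of prediction heads are assumptions for illustration, not Xiaomi's actual configuration.

```python
# Hypothetical sketch of multi-token prediction (MTP) target construction.
# The number of heads and all names are illustrative assumptions.

def mtp_targets(tokens, num_heads=3):
    """For each position t, head i is trained to predict token t + i + 1.

    Returns a list of (position, head_index, target_token) triples for
    every position that has enough future context.
    """
    triples = []
    for t in range(len(tokens)):
        for i in range(num_heads):
            future = t + i + 1
            if future < len(tokens):
                triples.append((t, i, tokens[future]))
    return triples

# Example: with 3 heads, position 0 of "a b c d" supervises b, c, and d at once.
targets = mtp_targets(["a", "b", "c", "d"], num_heads=3)
```

Because each position supervises several future tokens, the model receives a denser training signal per sequence than standard next-token prediction.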
Models
List of models
| Model | Release date | Parameters | License | Knowledge cutoff | Ref. |
|---|---|---|---|---|---|
| MiMo-7B | 30 April 2025 | 7 billion | MIT | Unknown | |
| MiMo-V2-Flash | 17 December 2025 | 309 billion | MIT | December 2024 | |
| MiMo-V2-Pro | 18 March 2026 | 1 trillion | Proprietary | May 2025 | |
| MiMo-V2-Omni | 18 March 2026 | Unknown | Proprietary | May 2025 | |
| MiMo-V2-TTS | 18 March 2026 | Unknown | Proprietary | N/A | |
| MiMo-V2.5 | 22 April 2026 | 310 billion | MIT | May 2025 | |
| MiMo-V2.5-Pro | 22 April 2026 | 1.02 trillion | MIT | May 2025 | |
MiMo-7B
MiMo-7B was the first model in the MiMo family. The base model, MiMo-7B-Base, was pre-trained on approximately 25 trillion tokens drawn from web pages, academic papers, books, and synthetic reasoning data.[7] MiMo-7B-RL underwent supervised fine-tuning and reinforcement learning on 130,000 mathematics and code problems.[7]
MiMo-7B-RL-0530 was released in May 2025. It scaled the fine-tuning dataset from 500,000 to 6 million instances, extended the RL context window from 32,000 to 48,000 tokens, and improved the model's AIME 2024 score from 68.2 to 80.1.[7]
MiMo-VL-7B was a vision-language model combining a Vision Transformer encoder with the MiMo-7B backbone. It was trained in four stages consuming 2.4 trillion tokens.[9] Its reinforcement learning variant used Mixed On-Policy Reinforcement Learning (MORL) which integrated reward signals across perception, grounding, and reasoning.[9] Xiaomi also released MiMo-Audio-7B, an audio-language model for voice conversion, style transfer, and speech editing.[10][11]
MiMo-V2-Flash
MiMo-V2-Flash was launched in December 2025.[12] It is an open-source mixture-of-experts (MoE) model with 309 billion total parameters and 15 billion active parameters. It was trained on 27 trillion tokens using FP8 mixed precision.[13][14] It uses hybrid attention, interleaving sliding-window and global attention layers at a 5:1 ratio.[13]
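A 5:1 interleaving of sliding-window and global attention means that five local-attention layers are followed by one global-attention layer, repeating through the depth of the network. The sketch below illustrates one plausible way to derive such a layer schedule; the function and layer-kind names are illustrative assumptions, not Xiaomi's implementation.

```python
# Hypothetical sketch of a hybrid attention layer schedule.
# A ratio of 5 means: 5 sliding-window layers, then 1 global layer, repeating.

def attention_schedule(num_layers, window_to_global_ratio=5):
    """Assign each transformer layer a type so that every
    (ratio + 1)-th layer uses global attention and the rest
    use sliding-window attention."""
    kinds = []
    for layer in range(num_layers):
        if (layer + 1) % (window_to_global_ratio + 1) == 0:
            kinds.append("global")
        else:
            kinds.append("sliding_window")
    return kinds

# Example: a 12-layer stack yields global attention at layers 6 and 12.
schedule = attention_schedule(12)
```

Sliding-window layers attend only to a fixed-size local neighborhood, which keeps cost linear in sequence length; the periodic global layers let information propagate across the full context.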
MiMo-V2-Pro
Xiaomi publicly introduced MiMo-V2-Pro on 18 March 2026. It has over 1 trillion total parameters, 42 billion active, and a 1-million-token context window.[15] Before the official release, the model had appeared anonymously on OpenRouter under the codename "Hunter Alpha," where it drew substantial usage and topped daily charts for several days, according to Xiaomi and Reuters.[16] During its listing on OpenRouter, the model reportedly processed over one trillion tokens in total usage. Xiaomi later said Hunter Alpha was an early internal test build of MiMo-V2-Pro, and Reuters reported that the model had been mistaken by some users for a possible DeepSeek system before Xiaomi confirmed its origin.[16]
The model was released as a proprietary API product, and Luo Fuli stated that Xiaomi intended to open-source a variant at an unspecified future date. Xiaomi partnered with several API platforms, such as OpenClaw, to launch the model. These platforms initially offered a week-long free trial; due to overwhelming demand, Xiaomi later extended the free trial until 2 April 2026.[17]
MiMo-V2-Omni
Alongside MiMo-V2-Pro, Xiaomi launched MiMo-V2-Omni on 18 March 2026. It handles image, video, audio, and text inputs. Before the official release, it was codenamed "Healer Alpha" on OpenRouter.[15]
MiMo-V2-TTS
On the same date as the release of MiMo-V2-Pro and MiMo-V2-Omni, Xiaomi also released a text-to-speech model named MiMo-V2-TTS. It is a speech synthesis model trained on audio data, capable of emotional transitions, mid-sentence tone shifts, singing, and synthesis of regional dialects such as Sichuanese, Cantonese, Henan, and Taiwanese.[15]
Licensing
Xiaomi has used different licensing approaches for different models in the MiMo family. The MiMo-7B series and MiMo-V2-Flash were released as open-weight models. MiMo-V2-Flash was published under the MIT license with model weights and inference code available on Hugging Face.[18]
MiMo-V2-Pro and MiMo-V2-Omni were released as proprietary models, accessible through Xiaomi's API platform and third-party API providers. Luo Fuli stated that Xiaomi intended to open-source a variant of MiMo-V2-Pro, although she did not specify a timeline.[16] MiMo-V2-TTS was released as a proprietary model with no publicly available weights.