Qwen
Family of large language models by Alibaba
From Wikipedia, the free encyclopedia
Qwen (also known as Tongyi Qianwen, Chinese: 通义千问; pinyin: Tōngyì Qiānwèn) is a family of large language models developed by Alibaba Cloud. Many Qwen variants are distributed as open‑weight models under the Apache‑2.0 license, while others are served through Alibaba Cloud.[1]
| Qwen | |
|---|---|
| Screenshot of an example Qwen 3 answer describing Wikipedia, with the "Thinking" feature enabled | |
| Developer | Alibaba Cloud |
| Initial release | April 2023 |
| Stable release | Qwen3.5-397B-A17B / February 16, 2026; Qwen3.5-Plus / February 16, 2026; Qwen3.5-Medium Models / February 24, 2026; Qwen3-Coder-Next / February 2, 2026 |
| Written in | Python |
| Type | Large language model, chatbot |
| License | Apache-2.0; Qwen Research License; Qwen License |
| Website | chat.qwen.ai |
| Repository | github |
| Qwen | |
|---|---|
| Tongyi Qianwen | |
| Traditional Chinese | 通義千問 |
| Simplified Chinese | 通义千问 |
| Literal meaning | to comprehend the meaning, [and to answer] a thousand kinds of questions |
In July 2024, South China Morning Post reported that benchmarking platform SuperCLUE ranked Qwen2‑72B‑Instruct behind OpenAI's GPT‑4o and Anthropic’s Claude 3.5 Sonnet and ahead of other Chinese models.[2]
Models

Alibaba launched a beta of Qwen in April 2023 under the name Tongyi Qianwen, then opened it for public use in September 2023 after regulatory clearance.[3][4]
The model's architecture was based on the Llama architecture developed by Meta AI.[5][6] Alibaba released the Qwen 7B weights in August 2023, followed by the 72B and 1.8B models in December 2023.[7][8] Although the models are sometimes described as open source, neither the training code nor documentation of the training data has been released, and they do not meet the terms of either the Open Source AI Definition or the Model Openness Framework from the Linux Foundation.
Qwen2 was released in June 2024, and in September it released some of its models with open weights, while keeping its most advanced models proprietary.[9][10] Qwen2 contains both dense and sparse models.[11]
In November 2024, QwQ-32B-Preview, a reasoning-focused model similar to OpenAI's o1, was released under the Apache 2.0 license, although only the weights were released, not the dataset or training method.[12][13] QwQ has a 32K-token context length and performs better than o1 on some benchmarks.[14] November 2024 also saw the launch of Accio,[15] an AI-native application built on Qwen that generates market insights and answers sourcing questions for Alibaba's business-to-business e-commerce site; it can automate labor-intensive tasks such as data collection and trend tracking.[16]
The Qwen-VL series is a line of visual language models that combines a vision transformer with an LLM.[5][17] Alibaba released Qwen2-VL with variants of 2 billion and 7 billion parameters.[18][19][20]
In January 2025, Qwen2.5-VL was released with variants of 3, 7, 32, and 72 billion parameters.[21] All models except the 72B variant are licensed under the Apache 2.0 license.[22] Qwen-VL-Max is Alibaba's flagship vision model as of 2024, and is sold by Alibaba Cloud at a cost of US$0.41 per million input tokens.[23]
Alibaba has released several other model types such as Qwen-Audio and Qwen2-Math.[24] In total, it has released more than 100 open weight models, with its models having been downloaded more than 40 million times.[10] Fine-tuned versions of Qwen have been developed by enthusiasts, such as "Liberated Qwen", developed by San Francisco-based Abacus AI, which is a version that responds to any user request without content restrictions.[25]
On January 29, 2025, Alibaba launched Qwen2.5-Max.[26][27]
On March 24, 2025, Alibaba launched Qwen2.5-VL-32B-Instruct as a successor to the Qwen2.5-VL model. It was released under the Apache 2.0 license.[28][29]
On March 26, 2025, Qwen2.5-Omni-7B was released under the Apache 2.0 license and made available through chat.qwen.ai, as well as platforms like Hugging Face, GitHub, and ModelScope. The Qwen2.5-Omni model accepts text, images, videos, and audio as input and can generate both text and audio as output, allowing it to be used for real-time voice chatting,[30] similar to OpenAI's GPT-4o.[citation needed]
On April 28, 2025, the Qwen3 model family was released,[31] with all models licensed under the Apache 2.0 license. The Qwen3 model family includes both dense (0.6B, 1.7B, 4B, 8B, 14B, and 32B parameters) and sparse models (30B with 3B activated parameters, 235B with 22B activated parameters). They were trained on 36 trillion tokens in 119 languages and dialects.[32]
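The "activated parameters" figure quoted for the sparse Qwen3 variants reflects their mixture-of-experts (MoE) design, in which a router selects only a few expert sub-networks for each token, so far fewer parameters are used per token than the model stores in total. The toy calculation below uses hypothetical expert counts and sizes (not Qwen's published configuration) to show how a model can hold roughly 30B parameters while touching only about 3B per token:

```python
# Toy illustration of "total vs. activated" parameters in a sparse
# mixture-of-experts model. All numbers are hypothetical examples,
# not Qwen's actual architecture.

def active_params(total_experts, experts_per_token, params_per_expert, shared_params):
    """Return (total, activated) parameter counts for a top-k MoE model.

    shared_params covers everything outside the expert layers (embeddings,
    attention, router); only experts_per_token experts run for each token.
    """
    total = shared_params + total_experts * params_per_expert
    active = shared_params + experts_per_token * params_per_expert
    return total, active

# Hypothetical model: 64 experts of 0.45B parameters each, 1.2B shared,
# with the router activating 4 experts per token.
total, active = active_params(total_experts=64, experts_per_token=4,
                              params_per_expert=0.45e9, shared_params=1.2e9)
print(f"total = {total/1e9:.0f}B, activated per token = {active/1e9:.1f}B")
# prints: total = 30B, activated per token = 3.0B
```

The same accounting explains labels like "30B-A3B": the suffix gives the activated-per-token count, which is what drives inference cost, while the full total must still fit in memory.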
On September 5, 2025, Alibaba launched Qwen3-Max.[33]
On September 10, 2025, Qwen3-Next was released under the Apache 2.0 license and made available through chat.qwen.ai, as well as platforms like Hugging Face and ModelScope.[34][non-primary source needed]
On September 22, 2025, Qwen3-Omni was released under the Apache 2.0 license and made available through chat.qwen.ai, as well as platforms like Hugging Face and ModelScope. Qwen3-Omni is a multimodal model that can generate text, images, audio, and video.[35][non-primary source needed]
On January 27, 2026, Qwen3-Max-Thinking was released; the model can generate text, pictures, or video.[36] The Qwen3.5 model followed on February 17, 2026.[37]
On February 16, 2026, Qwen3.5 and Qwen3.5-Plus were released; Qwen3.5 is open-weight.[38] Several Qwen executives resigned in early 2026, including Lin Junyang,[39] who led the development of Qwen3-Max and Qwen3.5.[40] Amid concern that this could signal a shift away from research and open-source artificial intelligence,[41] Alibaba said it would continue its focus on open source.[40]
| Version | Release date | Ref. |
|---|---|---|
| Tongyi Qianwen | September 2023 | [42] |
| Qwen-VL | August 2023 | [43] |
| Qwen2 | June 2024 | [10] |
| Qwen2-Audio | August 2024 | [44] |
| Qwen2-VL | December 2024 | [18] |
| Qwen2.5 | September 2024 | [45] |
| Qwen2.5-Coder | November 2024 | [46] |
| QvQ | December 2024 | [citation needed] |
| Qwen2.5-VL | January 2025 | [47] |
| QwQ-32B | March 2025 | [48] |
| Qwen2.5-Omni | March 2025 | [citation needed] |
| Qwen3 | April 2025 | [31] |
| Qwen3-Coder (AKA Qwen3-Coder-480B-A35B)<br>Qwen3-Coder-Flash (AKA Qwen3-Coder-30B-A3B) | July 2025 | [49] |
| Qwen3-Max | September 2025 | [50] |
| Qwen3-Next | September 2025 | [51] |
| Qwen3-Omni | September 2025 | [35] |
| Qwen3-VL | September 2025 | [52] |
| Qwen3-Coder-Next | February 2026 | [53] |
| Qwen3.5 | February 2026 | [54][38] |
| Qwen3.5-Plus | February 2026 | [38] |