Qwen
Family of large language models by Alibaba
From Wikipedia, the free encyclopedia
Qwen (also known as Tongyi Qianwen, Chinese: 通义千问; pinyin: Tōngyì Qiānwèn) is a family of large language models developed by Alibaba Cloud. Many Qwen models are distributed under the free and open-source Apache 2.0 license, the source-available Qwen License,[1] or the non-commercial Qwen Research License;[2] other proprietary Qwen models are served through Alibaba Cloud.[3]
| Qwen | |
|---|---|
| Screenshot of an example of a Qwen 3 answer describing Wikipedia, with the "Thinking" feature enabled | |
| Developer | Alibaba Cloud |
| Initial release | April 2023 |
| Stable release | Qwen3.6-Max / April 18, 2026; Qwen3.6-35B-A3B / April 15, 2026; Qwen3.6-27B / April 22, 2026; Qwen3.6-Plus / April 1, 2026 |
| Written in | Python |
| Operating system | |
| Type | Large language model, chatbot |
| License | Various; see § List of models |
| Website | qwen |
| Repository | github |
| Qwen | |
|---|---|
| Tongyi Qianwen | |
| Traditional Chinese | 通義千問 |
| Simplified Chinese | 通义千问 |
| Literal meaning | to comprehend the meaning, [and to answer] a thousand kinds of questions |
Models
Qwen
Alibaba launched a beta of Qwen in April 2023 under the name Tongyi Qianwen, then opened it for public use in September 2023 after regulatory clearance.[4][5]
The model's architecture was based on the Llama architecture developed by Meta AI.[6][7] Qwen 7B weights were released for download in August 2023, followed by the 72B and 1.8B models in December 2023.[8][9]
Qwen2
Qwen2 was released in June 2024, and in September 2024 Alibaba released some of the Qwen2 models with open weights while keeping its most advanced models proprietary.[10][11] Qwen2 contains both dense and sparse models.[12]
In November 2024, QwQ-32B-Preview, a model focusing on reasoning similar to OpenAI's o1, was released under the Apache 2.0 License, although only the weights were released, not the dataset or training method.[13][14] QwQ has a 32K-token context length and performs better than o1 on some benchmarks.[15] It was also in November 2024 that the Accio application was launched.[16] Accio is an AI-native application built on Qwen that generates market insights and answers sourcing questions for Alibaba's business-to-business e-commerce site. The tool can automate labor-intensive tasks such as data collection and trend tracking.[17]
The Qwen-VL series is a line of visual language models that combines a vision transformer with an LLM.[6][18] Alibaba released Qwen2-VL with variants of 2 billion and 7 billion parameters.[19][20][21]
In January 2025, Qwen2.5-VL was released with variants of 3, 7, 32, and 72 billion parameters.[22] All models except the 72B variant are licensed under the Apache 2.0 license.[23] Qwen-VL-Max is Alibaba's flagship vision model as of 2024, and is sold by Alibaba Cloud at a cost of US$0.41 per million input tokens.[24]
Alibaba has released several other model types such as Qwen-Audio and Qwen2-Math.[25] In total, it has released more than 100 open weight models, with its models having been downloaded more than 40 million times.[11] Fine-tuned versions of Qwen have been developed by enthusiasts, such as "Liberated Qwen", developed by San Francisco-based Abacus AI, which is a version that responds to any user request without content restrictions.[26]
On January 29, 2025, Alibaba launched Qwen2.5-Max.[27][28]
On March 24, 2025, Alibaba launched Qwen2.5-VL-32B-Instruct as a successor to the Qwen2.5-VL model. It was released under the Apache 2.0 license.[29][30]
On March 26, 2025, Qwen2.5-Omni-7B was released under the Apache 2.0 license and made available through chat.qwen.ai, as well as platforms like Hugging Face, GitHub, and ModelScope. The Qwen2.5-Omni model accepts text, images, videos, and audio as input and can generate both text and audio as output, allowing it to be used for real-time voice chatting.[31]
Qwen3
On April 28, 2025, the Qwen3 model family was released,[32] with all models licensed under the Apache 2.0 license. The Qwen3 model family includes both dense and MoE models: the dense models come in 0.6B, 1.7B, 4B, 8B, 14B, and 32B sizes, and the MoE models in 30B-A3B (30B total with 3B activated parameters) and 235B-A22B (235B total with 22B activated parameters).[33] They were trained on 36 trillion tokens in 119 languages and dialects.[34]
The Qwen3 collection includes:[33]
- Qwen3, which supports switching between thinking and non-thinking mode
- Qwen3 Base, the pretrained base model
- Qwen3 Instruct, which supports only non-thinking mode
- Qwen3 Thinking, which supports only thinking mode
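As an illustrative sketch (not drawn from this article's sources): Qwen3's published model cards describe a per-turn "soft switch", where appending `/think` or `/no_think` to a user message toggles thinking mode. The helper below is hypothetical, shown only to make the mode-switching convention concrete.

```python
# Minimal sketch of Qwen3's documented "soft switch" for thinking mode:
# appending /think or /no_think to a user turn toggles the behavior.
# build_messages is a hypothetical helper, not part of any Qwen library.
def build_messages(user_text: str, thinking: bool) -> list:
    switch = "/think" if thinking else "/no_think"
    return [{"role": "user", "content": f"{user_text} {switch}"}]

messages = build_messages("Describe Wikipedia.", thinking=True)
# The resulting message list would then be passed through a chat template
# (e.g. a Hugging Face tokenizer's apply_chat_template) before generation.
```

The Instruct and Thinking variants listed above remove the need for this switch by fixing the mode at the model level.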
In addition to open-weight models, the Qwen3 family includes Qwen3-Max, a proprietary large model with over 1T parameters, trained on about 36T tokens and available through an API,[35] as well as Qwen3-Max-Thinking, a reasoning variant that can generate text, pictures, or video.[36]
Qwen3.5 and Qwen3.6
On February 16, 2026, Qwen3.5 and Qwen3.5-Plus were released; Qwen3.5 is an open-weight model.[37][38]
While previous models had been open-weight, Qwen3.5-Omni and Qwen3.6-Plus were released in April 2026 as proprietary models; access to them is limited to the chatbots' websites and the Alibaba Cloud platform.[39] The Qwen3.6-35B-A3B model was released under the Apache 2.0 license in the same month.[40] Alibaba's Qwen3.5 is designed to complete complex tasks, and the company claims it can beat rival U.S. models on several objective metrics, including speed and cost; according to Alibaba, it can independently take actions across mobile and desktop apps.[41]
List of models
| Name | Release date | License | Ref. |
|---|---|---|---|
| Qwen (Tongyi Qianwen) | September 2023 | 72B, 14B, 7B: Tongyi Qianwen; 1.8B: Tongyi Qianwen Research | [42][43] |
| Qwen-VL | August 2023 | Max, Plus: Proprietary; Base, Chat: Tongyi Qianwen | [44][45][46] |
| Qwen2 | June 2024 | 72B: Tongyi Qianwen; 57B-A14B, 7B, 1.5B, 0.5B: Apache 2.0 | [11][47][48] |
| Qwen2-Audio | August 2024 | Apache 2.0 | [49][50] |
| Qwen2-VL | December 2024 | 72B: Qwen; 7B, 2B: Apache 2.0 | [19][51][52] |
| Qwen2.5 | September 2024 | 72B: Qwen; 32B, 14B, 7B, 1.5B, 0.5B: Apache 2.0; 3B: Qwen Research | [53][54] |
| Qwen2.5-Coder | November 2024 | 32B, 14B, 7B, 1.5B, 0.5B: Apache 2.0; 3B: Qwen Research | [55][56] |
| QvQ | December 2024 | Qwen | [57][58] |
| Qwen2.5-VL | January 2025 | 72B: Qwen; 32B, 7B: Apache 2.0; 3B: Qwen Research | [59][60][61][62][63] |
| QwQ-32B | March 2025 | Apache 2.0 | [64][65] |
| Qwen2.5-Omni | March 2025 | 7B: Apache 2.0; 3B: Qwen Research | [66][67][68] |
| Qwen3 | April 2025 | Apache 2.0 | [69][70] |
| Qwen3-Coder (Qwen3-Coder-480B-A35B), Qwen3-Coder-Flash (Qwen3-Coder-30B-A3B) | July 2025 | | [71][72][73] |
| Qwen3-Max | September 2025 | Proprietary | [74] |
| Qwen3-Next | September 2025 | Apache 2.0 | [2][75] |
| Qwen3-Omni | September 2025 | | [76][77] |
| Qwen3-VL | September 2025 | | [78][79] |
| Qwen3-Coder-Next | February 2026 | | [80][81] |
| Qwen3.5 | February 2026 | | [82][83][84] |
| Qwen3.5-Plus | February 2026 | Proprietary | [83] |
| Qwen3.6-Plus | April 2026 | | [85] |
| Qwen3.6 (Qwen3.6-35B-A3B) | April 2026 | Apache 2.0 | [40][86] |
History
In a January 2026 update of the Qwen mobile application, Alibaba Group connected the chatbot to the company's ecosystem, starting with food-delivery services. The company announced plans to let users assign ecosystem-related tasks to more company platforms, such as Taobao and Fliggy, and to help consumers with errands, including phone calls and document processing.[87]
Lin Junyang, former head of the Qwen AI model division, resigned in March 2026 after the release of Qwen3.5 and Qwen3.5-Plus,[88] becoming the third executive to leave Alibaba that year. Amid concern that this could signal a shift away from research and open-source artificial intelligence,[89] Alibaba said it would continue its focus on open source.[90]
Later the same month, a company announcement revealed the formation of a new AI business unit, Alibaba Token Hub.[91] The new unit, led by Alibaba CEO Eddie Wu, will supervise AI-related work and includes the Tongyi Large Model Business Unit, the MaaS Business Line, Qwen, Wukong, and AI Innovation.[92] The Token Hub is further supported by senior leaders Zhou Jingren, as chief AI architect, and CTO Wu Zeming. Led by Zhou, Alibaba's Tongyi Large Model Business Unit, formerly known as Tongyi Laboratory, was restructured to specialize in the development of Qwen AI models.[93]
Reception
There are over 200,000 variations of Qwen's open-source AI models on Hugging Face's model list; Qwen3-VL-2B-Instruct alone has surpassed 18 million downloads globally.[94] The platform hosts many small, specialized Qwen derivatives built by third-party developers. Models can also be found on Alibaba's ModelScope, which was created for greater open-source accessibility after China restricted access to Hugging Face in 2022.[95]
The U.S.-China Economic and Security Review Commission reported that China's approach to developing open-source AI models, such as Qwen, has been critical to its ability to overcome compute constraints and to expand its capabilities in real-world data curation. This enhanced data-curation capacity is seen as important to China's ability to integrate AI into its research, manufacturing, and robotics sectors.[95]