Qwen3.6 35B A3B is a native vision-language MoE model with hybrid attention. Compared to Qwen3.5 35B A3B, Alibaba reports stronger agentic coding, mathematical and code reasoning, and better spatial understanding (including object localization and detection).
Added Apr 17, 2026
Context Window: 262.1K tokens
Max Output: 16.4K tokens
Input Price: $0.20 per 1M tokens
Output Price: $1.00 per 1M tokens
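As a rough guide, per-request cost follows directly from the listed per-1M-token rates. The sketch below is purely illustrative; the token counts are hypothetical and not taken from this listing.

```python
# Listed rates for this model (USD per 1M tokens).
INPUT_PRICE_PER_M = 0.20
OUTPUT_PRICE_PER_M = 1.00

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request from its token counts."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_M + \
           (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_M

# Hypothetical example: a 100K-token prompt with a 10K-token response.
print(f"${estimate_cost(100_000, 10_000):.2f}")  # $0.02 input + $0.01 output
```

At these rates, input tokens dominate cost only for prompts more than five times longer than the response, since output tokens are priced 5x higher.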
Performance metrics and benchmarks
Sourced from Artificial Analysis.
Intelligence Index: 31.5
Coding Index: 17.6
Agentic Index: 58.3
GPQA Diamond (graduate-level scientific reasoning): 81.7%; better than 83% of models compared
HLE (Humanity's Last Exam): 12.5%; better than 76% of models compared
IFBench (instruction following): 36.2%; better than 31% of models compared
τ²-Bench Telecom (conversational AI agents in dual-control scenarios): 85.1%; better than 79% of models compared
AA-LCR (long-context reasoning): 56.7%; better than 74% of models compared
GDPval-AA (economically valuable tasks): 39.9%; better than 85% of models compared
CritPt (research-level physics reasoning): 0.3%; better than 64% of models compared
SciCode (Python programming for scientific computing): 1.3%; better than 1% of models compared
Terminal-Bench Hard (agentic coding and terminal use): 25.8%; better than 72% of models compared
AA-Omniscience Accuracy (proportion of questions answered correctly): 18.9%; better than 55% of models compared
AA-Omniscience Hallucination Rate (share of non-correct responses answered incorrectly rather than declined; lower is better): 49.7%; better than 89% of models compared
Last updated May 11, 2026