
GLM-4.7-Flash

Introduction

GLM-4.7-Flash is a 30B-A3B Mixture-of-Experts (MoE) model, with 30B total parameters and 3B activated per token. As the strongest model in the 30B class, GLM-4.7-Flash offers a new option for lightweight deployment that balances performance and efficiency.

Performance on Benchmarks

| Benchmark          | GLM-4.7-Flash | Qwen3-30B-A3B-Thinking-2507 | GPT-OSS-20B |
|--------------------|---------------|-----------------------------|-------------|
| AIME 25            | 91.6          | 85.0                        | 91.7        |
| GPQA               | 75.2          | 73.4                        | 71.5        |
| LCB v6             | 64.0          | 66.0                        | 61.0        |
| HLE                | 14.4          | 9.8                         | 10.9        |
| SWE-bench Verified | 59.2          | 22.0                        | 34.0        |
| τ²-Bench           | 79.5          | 49.0                        | 47.7        |
| BrowseComp         | 42.8          | 2.29                        | 28.3        |

Evaluation Parameters

Default Settings (Most Tasks)

  • temperature: 1.0
  • top-p: 0.95
  • max new tokens: 131072
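
As a sketch, these defaults map directly onto vLLM's SamplingParams; the checkpoint id below is an assumption for illustration, not an official path:

```python
# Minimal sketch of the default sampling settings using vLLM's offline API.
from vllm import LLM, SamplingParams

sampling = SamplingParams(
    temperature=1.0,    # default temperature for most tasks
    top_p=0.95,         # default nucleus-sampling threshold
    max_tokens=131072,  # max new tokens
)

llm = LLM(model="zai-org/GLM-4.7-Flash")  # assumed Hub id; replace with your local path
outputs = llm.generate(["Briefly explain mixture-of-experts routing."], sampling)
print(outputs[0].outputs[0].text)
```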

For multi-turn agentic tasks (τ²-Bench and Terminal Bench 2), please turn on Preserved Thinking mode.

Terminal Bench, SWE-bench Verified

  • temperature: 0.7
  • top-p: 1.0
  • max new tokens: 16384

τ²-Bench

  • temperature: 0
  • max new tokens: 16384
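
For reference, the per-task overrides above can be collected into a single lookup, again using vLLM's SamplingParams; the dictionary keys are illustrative labels, not framework identifiers:

```python
# Sketch: the evaluation settings above, keyed by benchmark.
from vllm import SamplingParams

EVAL_SETTINGS = {
    # default settings (most tasks)
    "default": SamplingParams(temperature=1.0, top_p=0.95, max_tokens=131072),
    # Terminal Bench and SWE-bench Verified
    "terminal_bench": SamplingParams(temperature=0.7, top_p=1.0, max_tokens=16384),
    "swe_bench_verified": SamplingParams(temperature=0.7, top_p=1.0, max_tokens=16384),
    # τ²-Bench uses greedy decoding (temperature 0)
    "tau2_bench": SamplingParams(temperature=0.0, max_tokens=16384),
}

def sampling_for(benchmark: str) -> SamplingParams:
    """Return the sampling settings for a benchmark, falling back to the defaults."""
    return EVAL_SETTINGS.get(benchmark, EVAL_SETTINGS["default"])
```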

For the τ²-Bench evaluation, we added an additional prompt to the Retail and Telecom user interactions to avoid failure modes caused by the user ending the interaction incorrectly. For the Airline domain, we applied the domain fixes proposed in the Claude Opus 4.5 release report.

Serve GLM-4.7-Flash Locally

For local deployment, GLM-4.7-Flash supports inference frameworks including vLLM and SGLang. Comprehensive deployment instructions are available in the official GitHub repository.

Note that vLLM and SGLang support GLM-4.7-Flash only on their main branches.
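
As a minimal sketch of querying a locally served instance, both vLLM and SGLang expose an OpenAI-compatible endpoint; the Hub id and port below are assumptions for illustration:

```python
# Sketch: chat with a locally served GLM-4.7-Flash over the
# OpenAI-compatible API exposed by vLLM (e.g. `vllm serve <model>`)
# or SGLang (`python -m sglang.launch_server --model-path <model>`).
from openai import OpenAI

# Assumes a server listening on localhost:8000 (vLLM's default port).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="zai-org/GLM-4.7-Flash",  # assumed id; must match the served model name
    messages=[{"role": "user", "content": "Hello, GLM-4.7-Flash!"}],
    temperature=1.0,  # default settings from the evaluation section
    top_p=0.95,
)
print(response.choices[0].message.content)
```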

