# Kimi-Linear-REAP-35B-A3B-Instruct

**Repository Path**: hf-models/Kimi-Linear-REAP-35B-A3B-Instruct

## Basic Information

- **Project Name**: Kimi-Linear-REAP-35B-A3B-Instruct
- **Description**: Mirror of https://huggingface.co/cerebras/Kimi-Linear-REAP-35B-A3B-Instruct
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2025-11-09
- **Last Updated**: 2025-11-09

## Categories & Tags

**Categories**: Uncategorized
**Tags**: None

## README

```yaml
language:
- en
library_name: transformers
tags:
- glm
- MOE
- pruning
- compression
license: mit
name: cerebras/Kimi-Linear-REAP-35B-A3B-Instruct
description: >
  This model was obtained by uniformly pruning 30% of experts in
  Kimi-Linear-48B-A3B-Instruct using the REAP method.
readme: >
  https://huggingface.co/cerebras/Kimi-Linear-REAP-35B-A3B-Instruct/main/README.md
pipeline_tag: text-generation
base_model:
- moonshotai/Kimi-Linear-48B-A3B-Instruct
```

[𓌳 REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression](https://arxiv.org/abs/2510.13999)

# Kimi-Linear-REAP-35B-A3B-Instruct

## ✨ Highlights

Introducing **Kimi-Linear-REAP-35B-A3B-Instruct**, a **memory-efficient compressed variant** of Kimi-Linear-48B-A3B-Instruct that maintains near-identical performance while being **30% lighter**. This model was created with **REAP (Router-weighted Expert Activation Pruning)**, a novel expert-pruning method that selectively removes redundant experts while preserving the router's independent control over the remaining experts.

Key features include:

- **Near-Lossless Performance**: Maintains almost identical accuracy on code generation, agentic coding, and function-calling tasks compared to the full 48B model
- **30% Memory Reduction**: Compressed from 48B to 35B parameters, significantly lowering deployment costs and memory requirements
- **Preserved Capabilities**: Retains all core functionality, including code generation, math and reasoning, and long-context question answering
- **Drop-in Compatibility**: Works with vanilla vLLM; no source modifications or custom patches required
- **Optimized for Real-World Use**: Particularly effective for resource-constrained environments, local deployments, and academic research

---

## 📋 Model Overview

**Kimi-Linear-REAP-35B-A3B-Instruct** has the following specifications:

- **Base Model**: Kimi-Linear-48B-A3B-Instruct
- **Compression Method**: REAP (Router-weighted Expert Activation Pruning)
- **Compression Ratio**: 30% expert pruning
- **Type**: Sparse Mixture-of-Experts (SMoE) Causal Language Model
- **Number of Parameters**: 35B total, 3B activated per token
- **Number of Layers**: 27
- **Number of Attention Heads**: 32
- **Number of Experts**: 180 (uniformly pruned from 256)
- **Number of Activated Experts**: 8 per token
- **Context Length**: 1,048,576 tokens
- **License**: MIT
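To sanity-check the pruned architecture after download, you can print a few fields from the checkpoint's configuration. A minimal sketch using `transformers`; the attribute names queried below are illustrative assumptions (this is a custom architecture, so inspect `config.to_dict()` for the actual keys):

```python
from transformers import AutoConfig

# Custom architecture: trust_remote_code is required so the model's
# own configuration class can be resolved.
config = AutoConfig.from_pretrained(
    "cerebras/Kimi-Linear-REAP-35B-A3B-Instruct",
    trust_remote_code=True,
)

# Attribute names are illustrative guesses; getattr falls back to "n/a"
# if a key is named differently in this architecture.
for key in ("num_hidden_layers", "num_experts", "num_experts_per_tok"):
    print(key, getattr(config, key, "n/a"))
```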
---

## 📊 Evaluations

| Benchmark | Kimi-Linear-48B-A3B-Instruct | Kimi-Linear-REAP-35B-A3B-Instruct |
|---|---|---|
| Compression | — | 30% |
| **Coding** | | |
| HumanEval | 86.6 | 87.2 |
| HumanEval+ | 82.3 | 81.1 |
| MBPP | 84.1 | 83.6 |
| MBPP+ | 66.9 | 69.3 |
| **Reasoning** | | |
| LiveCodeBench (v6, 25.01–25.05) | 27.6 | 30.2 |
| AIME25 | 30.0 | 40.0 |
| MATH-500 | 81.8 | 80.8 |
| GSM8k | 87.3 | 85.8 |
| **Long-context QA** | | |
| LongBench v2 | 36.8 | 37.2 |
| FRAMES | 55.7 | 52.3 |

🟩 *This checkpoint maintains almost identical performance while being 30% lighter.*

LongBench v2 and FRAMES are evaluated at a maximum input context length of 128K tokens; samples exceeding that length are truncated in the middle. The metric reported for FRAMES is LLM-as-a-judge accuracy with a GPT-4.1 judge. Note that FRAMES relies more heavily on the model's internal factual knowledge than LongBench v2 does, which is why it shows a larger accuracy drop from expert pruning. For more details on the evaluation setup, refer to the [REAP arXiv preprint](https://arxiv.org/abs/2510.13999).
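The middle-truncation scheme mentioned above keeps the beginning and end of an over-long input, so both the instructions and the final question survive. A generic, token-level illustration (the exact harness behavior is described in the preprint; the budget and head/tail split below are assumptions):

```python
def truncate_middle(tokens: list[int], max_len: int = 131072) -> list[int]:
    """Drop tokens from the middle of a sequence that exceeds max_len."""
    if len(tokens) <= max_len:
        return tokens
    head = max_len // 2          # keep the first half of the budget...
    tail = max_len - head        # ...and spend the rest on the ending
    return tokens[:head] + tokens[-tail:]
```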
---

## 🚀 Deployment

You can deploy the model directly with the **latest vLLM** (any release that supports Kimi-Linear); no source modifications or custom patches are required.

```bash
vllm serve cerebras/Kimi-Linear-REAP-35B-A3B-Instruct \
  --tensor-parallel-size 4 \
  --tool-call-parser kimi_k2 \
  --enable-auto-tool-choice
```

If you run into insufficient memory with this model, set a lower value for the `--max-num-seqs` flag (e.g., 64).
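Once the server is up, it exposes an OpenAI-compatible API. A minimal client sketch, assuming vLLM's default host and port (`localhost:8000`):

```python
from openai import OpenAI

# vLLM serves an OpenAI-compatible endpoint; the api_key is unused
# locally but required by the client.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="cerebras/Kimi-Linear-REAP-35B-A3B-Instruct",
    messages=[
        {"role": "user",
         "content": "Write a Python function that checks whether a number is prime."},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```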
## 🧩 Model Creation

This checkpoint was created by applying the **REAP (Router-weighted Expert Activation Pruning)** method uniformly across all Mixture-of-Experts (MoE) blocks of **Kimi-Linear-48B-A3B-Instruct**, with a **30% pruning rate**.

### How REAP Works

REAP selects experts to prune based on a novel **saliency criterion** that considers both:

- **Router gate values**: How frequently and strongly the router activates each expert
- **Expert activation norms**: The magnitude of each expert's output contributions

This dual consideration ensures that experts contributing minimally to the layer's output are pruned, while those that play critical roles in the model's computations are preserved.
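In code, the criterion can be paraphrased as a gate-weighted mean of expert output norms, computed over the tokens each expert actually processes on a calibration set. A schematic sketch, not the authors' reference implementation (see the REAP codebase linked below for that):

```python
import torch

def reap_saliency(gates: torch.Tensor, expert_out: torch.Tensor) -> torch.Tensor:
    """Schematic per-expert REAP saliency score.

    gates:      (tokens, experts) router gate values, zero for experts
                outside a token's top-k.
    expert_out: (tokens, experts, hidden) expert outputs, zero for
                experts a token was not routed to.
    """
    norms = expert_out.norm(dim=-1)                 # activation norms
    routed = (gates > 0).sum(dim=0).clamp(min=1)    # tokens per expert
    return (gates * norms).sum(dim=0) / routed      # gate-weighted mean

# Within each MoE layer, the experts with the lowest saliency scores
# are removed (30% of them here), with no fine-tuning afterwards.
```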
### Key Advantages

- **One-Shot Compression**: No fine-tuning is required after pruning; the model is immediately ready for deployment
- **Preserved Router Control**: Unlike expert-merging methods, REAP maintains the router's independent, input-dependent control over the remaining experts, avoiding "functional subspace collapse"
- **Generative Task Superiority**: REAP significantly outperforms expert-merging approaches on generative benchmarks (code generation, creative writing, mathematical reasoning) while remaining competitive on discriminative tasks

📚 For more details, refer to the following resources:

- [🧾 arXiv Preprint](https://arxiv.org/abs/2510.13999)
- [🧾 REAP Blog](https://www.cerebras.ai/blog/reap)
- [💻 REAP Codebase (GitHub)](https://github.com/CerebrasResearch/reap)

---

## ⚖️ License

This model is derived from **[`moonshotai/Kimi-Linear-48B-A3B-Instruct`](https://huggingface.co/moonshotai/Kimi-Linear-48B-A3B-Instruct)** and is distributed under the **MIT license**.

---

## 🧾 Citation

If you use this checkpoint, please cite the REAP paper:

```bibtex
@article{lasby-reap,
  title={REAP the Experts: Why Pruning Prevails for One-Shot MoE compression},
  author={Lasby, Mike and Lazarevich, Ivan and Sinnadurai, Nish and Lie, Sean and Ioannou, Yani and Thangarasa, Vithursan},
  journal={arXiv preprint arXiv:2510.13999},
  year={2025}
}
```