Category Models

Kimi-K2.5

Kimi-K2.5 is Moonshot AI’s current flagship open model for people who need one system to handle coding, visual inputs, long-context reasoning, and agent-style execution. Officially released on January 27, 2026, it extends Kimi K2 with native multimodality, a 256K context…

Kimi K2-Instruct

Kimi K2-Instruct is an instruction-tuned large language model in Moonshot AI’s Kimi K2 series, distinguished by its massive scale and developer-focused design. As part of Moonshot’s lineup, it represents the instruction-tuned variant of the Kimi K2 model, optimized for following…
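
As a developer-focused, instruction-tuned model, K2-Instruct is typically driven through a chat-style request. The sketch below shows how such a request body is assembled in the widely used OpenAI chat-completions format; the endpoint URL and model identifier are illustrative assumptions, not confirmed values from Moonshot's documentation, and no network call is made.

```python
import json

API_URL = "https://api.moonshot.cn/v1/chat/completions"  # assumed endpoint
MODEL = "kimi-k2-instruct"                               # assumed model id

def build_chat_request(system_prompt: str, user_prompt: str,
                       temperature: float = 0.3) -> dict:
    """Assemble an OpenAI-style chat-completions payload: a system message
    that sets behavior, followed by the user's instruction."""
    return {
        "model": MODEL,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": temperature,
    }

payload = build_chat_request(
    "You are a helpful coding assistant.",
    "Write a Python function that reverses a string.",
)
print(json.dumps(payload, indent=2))
```

In practice the payload would be POSTed to the chat endpoint with an API key; keeping request construction separate from transport makes it easy to unit-test prompts.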

Kimi-VL

Kimi-VL is an advanced vision-language AI model developed by Moonshot AI. It seamlessly integrates visual and textual understanding, allowing developers to build applications that can “see” images and “read” text at the same time. Unlike many large models that require…
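
Seeing an image and reading text "at the same time" means both go into a single multimodal message. A minimal sketch of packing an image and a question together, using the common vision-chat convention of mixed content parts; the field names follow that convention as an assumption, not Moonshot's confirmed schema.

```python
import base64

def build_vision_message(image_bytes: bytes, question: str) -> dict:
    """Pack raw image bytes and a text question into one user message.
    The image is inlined as a base64 data URL alongside the text part."""
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{b64}"}},
            {"type": "text", "text": question},
        ],
    }

# Toy PNG header bytes stand in for a real image file.
msg = build_vision_message(b"\x89PNG\r\n\x1a\n", "What is in this image?")
```

A real application would read the bytes from disk or a camera and append this message to the usual chat payload.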

Kimi-Researcher

Kimi-Researcher is an autonomous AI research agent within the Moonshot AI Kimi ecosystem, designed to help developers automate knowledge-intensive tasks. Unlike a basic chatbot, it behaves like a “thinking” research assistant that can browse the web, read documents, and even…
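
The browse-and-read behavior described above is, at its core, an iterative tool loop. A minimal sketch with stubbed tools, purely illustrative: a real agent would call a search API, fetch pages, and consult the model at each step to decide what to read next.

```python
def search(query: str) -> list[str]:
    """Stub web search: returns document ids for a query."""
    return [f"doc-{i}" for i in range(3)]

def read(doc_id: str) -> str:
    """Stub document reader: returns the text of one document."""
    return f"contents of {doc_id}"

def research(question: str, max_docs: int = 3) -> list[str]:
    """Gather notes by searching, then reading each hit, up to a budget.
    An autonomous agent adds a reasoning step between iterations; this
    sketch only shows the gather loop itself."""
    notes = []
    for doc_id in search(question)[:max_docs]:
        notes.append(read(doc_id))
    return notes

notes = research("What is Mixture-of-Experts?")
```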

Kimi K2

Kimi K2 is a large-scale Mixture-of-Experts (MoE) language model developed by Moonshot AI and released in mid-2025 as part of the Kimi ecosystem. According to the official model documentation, it features approximately 1 trillion total parameters with around 32 billion…
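
The gap between total and active parameters is the defining property of an MoE model: a router sends each token to only a few experts, so most weights sit idle on any given token. A toy illustration of that routing arithmetic; the expert counts and sizes below are made-up small numbers, not K2's real configuration, and the random router stands in for a learned one.

```python
import random

NUM_EXPERTS = 16           # experts in one MoE layer (toy number)
TOP_K = 2                  # experts consulted per token (toy number)
PARAMS_PER_EXPERT = 1_000  # toy parameter count per expert

def route_token(token_id: int) -> list[int]:
    """Pick TOP_K expert indices for a token. A real router is a learned
    gating network; a seeded random choice stands in for it here."""
    rng = random.Random(token_id)
    return rng.sample(range(NUM_EXPERTS), TOP_K)

total_params = NUM_EXPERTS * PARAMS_PER_EXPERT
active_params = TOP_K * PARAMS_PER_EXPERT
print(f"total: {total_params}, active per token: {active_params}")
```

Only 2 of 16 experts fire per token here, the same sparsity pattern by which K2 activates roughly 32 billion of its ~1 trillion parameters.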

Kimi K1

Kimi K1 is a large language model (LLM) from Moonshot AI, a Beijing-based AI startup known for pushing the limits of context length. Kimi K1 was an early milestone in Moonshot’s Kimi model family and is best…

Kimi K1.5

Kimi K1.5 was an important earlier-generation model in Moonshot AI’s Kimi family. Introduced in January 2025 as a multimodal reasoning model, it is no longer a current default backbone. In its official research release, Moonshot described K1.5 as an “o1-level multi-modal model,”…