Startup Anysphere's vibe coding tool Cursor has launched Composer, its first in-house proprietary coding large language model (LLM), as part of its Cursor 2.0 platform update.
Composer is designed to perform coding tasks quickly and accurately in production-scale environments and represents a new step in AI-assisted programming. It is already used by Cursor's engineering staff in daily development and has shown maturity and stability.
According to Cursor, Composer completes most interactions within 30 seconds while maintaining a high level of reasoning power across large and complex codebases.
The model is said to be four times faster than comparably intelligent systems and is trained for an "agentic" workflow in which autonomous coding agents collaborate to plan, write, test, and review code.
Previously, Cursor supported "vibe coding" by layering AI on top of leading proprietary LLMs from OpenAI, Anthropic, Google, and xAI to create or complete code based on natural language instructions from users (even those with no development training). Those options remain available to users.
Benchmark results
Composer's performance is benchmarked using "Cursor Bench," an internal evaluation suite derived from actual developer agent requests. The benchmark measures not only accuracy but also whether the model adheres to existing abstractions, style conventions, and engineering practices.
On this benchmark, Composer generates 250 tokens per second while delivering frontier-level coding intelligence. That is roughly twice as fast as leading fast-inference models and four times as fast as comparable frontier systems.
The comparison published by Cursor groups the models into several classes: "Best Open" (including Qwen Coder and GLM 4.6), "Fast Frontier" (Haiku 4.5, Gemini Flash 2.5), "Frontier 7/2025" (the most powerful models available mid-year), and "Best Frontier" (including GPT-5 and Claude Sonnet 4.5). Composer matches the intelligence of mid-frontier systems while delivering record generation speeds across all classes tested.
Model built with reinforcement learning and a mixture-of-experts architecture
Cursor research scientist Sasha Rush offered insight into the model's development in a post on the social network X, describing Composer as a reinforcement-learned (RL) mixture-of-experts (MoE) model.
"We used RL to train a large-scale MoE model that became very good at real-world coding and very fast."
Rush explained that the team co-designed Composer and the Cursor environment together so that the model could run efficiently at production scale.
"Unlike other ML systems, you can't abstract much away from a full-scale system. We co-designed this project and Cursor to be able to run agents at the scale we need."
Composer was trained on real software engineering tasks rather than static datasets. During training, the model worked within a complete codebase, using a set of operational tools such as file editing, semantic search, and terminal commands to solve complex engineering problems. Each training iteration involved solving concrete challenges, such as producing code edits, drafting plans, and generating targeted instructions.
The reinforcement loop was optimized for both accuracy and efficiency. Composer learned to choose effective tools, use parallelism, and avoid unnecessary or speculative responses. Over time, the model developed new behaviors such as running unit tests, fixing linter errors, and performing multi-step code searches autonomously.
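The tool-using training loop described above can be sketched as a simple cycle: pick a tool, observe the result, stop when the tests pass. The tool names and the toy "policy" below are hypothetical illustrations of the agentic pattern, not Cursor's actual implementation.

```python
# Minimal sketch of an agentic coding loop: the agent chooses among
# tools (file editing, semantic search, terminal commands) until the
# test suite passes. All names here are hypothetical placeholders.

def edit_file(state, arg):
    state["edits"].append(arg)
    return f"edited {arg}"

def semantic_search(state, arg):
    return f"found references to {arg}"

def run_terminal(state, arg):
    # Pretend the test suite passes once at least one edit has landed.
    state["tests_pass"] = bool(state["edits"])
    return "tests passed" if state["tests_pass"] else "tests failed"

TOOLS = {"edit_file": edit_file,
         "semantic_search": semantic_search,
         "run_terminal": run_terminal}

def propose_action(state, step):
    # Toy policy: search first, then edit, then run the tests.
    plan = [("semantic_search", "parse_config"),
            ("edit_file", "config.py"),
            ("run_terminal", "pytest")]
    return plan[min(step, len(plan) - 1)]

def agent_episode(max_steps=5):
    state = {"edits": [], "tests_pass": False}
    trace = []
    for step in range(max_steps):
        tool, arg = propose_action(state, step)
        trace.append((tool, TOOLS[tool](state, arg)))
        if state["tests_pass"]:   # in RL terms, the reward signal
            break
    return trace

trace = agent_episode()
```

In the real system the reward would come from actual test runs and linter output inside a full codebase; the structure of the loop, though, is the same.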
This design allows Composer to operate within the same runtime context as the end user, making it better aligned with real-world coding requirements such as handling version control, dependency management, and iterative testing.
From prototype to production
Composer's development builds on an earlier internal prototype known as Cheetah, which Cursor used to explore low-latency inference for coding tasks.
"Cheetah was v0 of this model, primarily to test speed," Rush said on X. "[Composer] is the same speed, but much smarter."
Cheetah's success in reducing latency led Cursor to recognize that speed is a key factor in developer confidence and ease of use.
Composer significantly improves reasoning and task generalization while maintaining that responsiveness.
Developers who used Cheetah during early testing noted that its speed changed the way they worked. One user commented, "It's so fast that I always stay in the loop as I work."
Composer maintains that speed while extending its capabilities to multi-step coding, refactoring, and testing tasks.
Integration with Cursor 2.0
Composer is fully integrated into Cursor 2.0, a major update to the company's agentic development environment.
The platform introduces a multi-agent interface that lets you run up to eight agents in parallel, each in an isolated workspace backed by a git worktree or a remote machine.
Within this system, Composer can act as one or more of these agents, performing tasks independently or collaboratively. Developers can compare the results of multiple concurrent agent runs and choose the best output.
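The worktree-based isolation described above can be sketched by building one `git worktree add` command per agent. The branch and path naming convention here is hypothetical, and the commands are only constructed, not executed, to keep the example self-contained.

```python
# Sketch of per-agent isolation via git worktrees. Each agent gets its
# own branch checked out in its own directory, so parallel agents never
# touch each other's working files. Naming is a hypothetical convention.

def worktree_commands(n_agents, base_branch="main"):
    """Build the git commands that would give each agent its own workspace."""
    commands = []
    for i in range(1, n_agents + 1):
        branch = f"agent-{i}"
        path = f".worktrees/{branch}"
        # `git worktree add -b <branch> <path> <start-point>` creates a
        # new branch checked out in a separate directory of the same repo.
        commands.append(
            ["git", "worktree", "add", "-b", branch, path, base_branch])
    return commands

cmds = worktree_commands(8)
```

After the runs finish, comparing outputs is then just a matter of diffing the agent branches against the base branch and merging the winner.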
Cursor 2.0 also includes supporting features that improve Composer's effectiveness:
In-editor browser (GA) – lets the agent run and test code directly inside the IDE and pass DOM information to the model.
Improved code review – aggregates diffs across multiple files to make model-generated changes faster to inspect.
Sandboxed terminals (GA) – isolate agent-executed shell commands for safe local execution.
Voice mode – adds speech-to-text controls for starting and managing agent sessions.
While these platform updates enhance the overall Cursor experience, Composer is positioned as the technical core that enables fast, reliable agentic coding.
Infrastructure and training system
To train Composer at scale, Cursor built custom reinforcement learning infrastructure that combines PyTorch and Ray for asynchronous training across thousands of NVIDIA GPUs.
The team developed specialized MXFP8 MoE kernels and hybrid sharded data parallelism to enable large-scale model updates with minimal communication overhead.
This setup lets Cursor train models natively at lower precision without post-training quantization, improving both inference speed and efficiency.
Composer's training relied on hundreds of thousands of concurrent sandboxed environments, each a self-contained coding workspace, running in the cloud. The company adapted its background-agent infrastructure to schedule these virtual machines dynamically and support the bursty nature of large-scale RL runs.
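The asynchronous, bursty scheduling pattern described above can be illustrated in miniature, with a thread pool standing in for the fleet of cloud sandboxes. The rollout function and the sizes are invented for the sketch; Cursor's real system dispatches full VM-backed coding environments via Ray.

```python
# Illustration of asynchronously scheduling many sandboxed RL rollouts.
# A thread pool stands in for cloud VMs; the rollout body is a toy
# placeholder for "spin up sandbox, let the agent act, score the result".
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_rollout(env_id):
    # Placeholder scoring: a real rollout would return test/linter rewards.
    return {"env": env_id, "reward": env_id % 3}

def train_step(n_envs=64, workers=16):
    rewards = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(run_rollout, i) for i in range(n_envs)]
        # Consume results as they complete rather than in submission
        # order: slow sandboxes overlap with fast ones instead of
        # blocking the whole batch, which is what absorbs bursty load.
        for fut in as_completed(futures):
            rewards.append(fut.result()["reward"])
    return sum(rewards) / len(rewards)

avg_reward = train_step()
```

The key design point is the `as_completed` consumption: the trainer never waits on a stragglers-first ordering, so GPU updates can proceed as soon as enough rollouts have returned.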
Enterprise use
Composer's performance gains are supported by infrastructure-level changes across Cursor's code intelligence stack.
The company has optimized its Language Server Protocol (LSP) integration to speed up diagnostics and navigation, particularly in Python and TypeScript projects. These changes reduce latency when Composer interacts with large repositories or generates updates across multiple files.
Enterprise users gain administrative control over Composer and other agents through team rules, audit logging, and sandbox enforcement. Cursor's Teams and Enterprise tiers also support pooled model usage, SAML/OIDC authentication, and analytics to monitor agent performance across an organization.
Pricing for individual users ranges from Free (Hobby) to Ultra ($200 per month), with expanded usage limits for Pro+ and Ultra subscribers.
Team pricing starts at $40 per user per month, with enterprise agreements offering custom usage and compliance options.
Composer's role in the evolving AI coding landscape
Composer differs from other AI development assistants such as GitHub Copilot and Replit's Agent in its focus on speed, reinforcement learning, and integration with live coding workflows.
Rather than acting as a passive suggestion engine, Composer is designed for continuous, agent-driven collaboration, in which multiple autonomous systems interact directly with a project's codebase.
This model-level specialization, training an AI to function within the real-world environment in which it will operate, is an important step toward practical autonomous software development. Rather than being trained solely on text data or static code, Composer was trained inside a dynamic IDE that mirrors its production environment.
Rush explained that this approach is essential to achieving real-world reliability: the model learns not only how to generate code but also how to integrate, test, and improve it in context.
What it means for enterprise developers and vibe coding
With Composer, Cursor is introducing more than just a fast model. It is deploying an AI system optimized for real-world use and built to work within the tools developers already rely on.
The combination of reinforcement learning, mixture-of-experts design, and tight product integration gives Composer a practical edge in speed and responsiveness that sets it apart from general-purpose language models.
While Cursor 2.0 provides the infrastructure for multi-agent collaboration, Composer is the core innovation that enables those workflows.
It is the first coding model built specifically for production-level agentic coding, and it offers an early glimpse of what day-to-day programming may look like when human developers and autonomous models share the same workspace.


