Download Cursor

Cursor @Codebase Chat — AI That Reads, Indexes, and Understands Your Entire Project

Stop copying code into ChatGPT. Cursor @codebase indexes every file in your project using embeddings — vector representations that capture the semantic meaning of your code. Ask a question in the chat panel with @codebase and the AI searches your entire repository for relevant functions, types, configurations, and comments. Answers include specific file references you can click to navigate directly to the source. No manual file selection, no context window limits, no copy-pasting.

The index builds locally on your machine. Your code never leaves your workstation during indexing. When you edit a file, the index updates incrementally — no full re-indexing required. @codebase works with projects of any size, from small libraries to monorepos with thousands of files. Combine it with @file for specific references, @symbol for function lookups, or @folder to scope questions to a directory.

Download Cursor Free View Pricing
Cursor @codebase chat panel showing AI answering a project-specific question with file references and code snippets

Cursor @Codebase Chat Overview — April 2026

  • Indexes every file in your project using embeddings for semantic code understanding
  • Context-aware Q&A: ask questions and get answers that reference your actual functions, types, and configs
  • @file, @symbol, and @folder references for precise context control in chat queries
  • Semantic search finds relevant code by meaning — not just keyword matching
  • Local indexing: your code stays on your machine during the embedding generation process
  • Incremental index updates: edits reflect in the index without full re-indexing
  • Works with Claude Sonnet/Opus, GPT-4o/5.4, and Gemini — auto mode selects the best model

How @Codebase Indexing and Semantic Search Work

Cursor builds a semantic map of your project that goes beyond file names and symbol tables. The AI understands what your code does, not just what it is called.

Cursor @codebase indexing process showing embeddings being generated for project files

Embeddings-Based Project Indexing

When you open a project in Cursor, the editor generates embeddings for every file. Embeddings are high-dimensional vector representations that encode the semantic meaning of code — what a function does, what data a type represents, how modules relate to each other. This is fundamentally different from a text search index. A text search finds files containing the word "authenticate." An embeddings search finds all code related to authentication — login flows, token validation, session management, permission checks — even if those files never use the word "authenticate."
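The idea behind embeddings search can be sketched in a few lines. This is a conceptual illustration only, not Cursor's implementation: the file names and vectors are toy data, and real embeddings have hundreds or thousands of dimensions rather than three. Ranking by cosine similarity is what lets an "authentication" query surface login and session code that never uses that word.

```typescript
// Conceptual sketch: semantic search over pre-computed embedding vectors.
// Vectors and file names are hypothetical toy data.

type IndexedFile = { path: string; embedding: number[] };

// Cosine similarity: how close two vectors point in the same direction.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank indexed files by similarity to a query embedding, keep the top K.
function semanticSearch(query: number[], index: IndexedFile[], topK: number): string[] {
  return index
    .map((f) => ({ path: f.path, score: cosineSimilarity(query, f.embedding) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK)
    .map((r) => r.path);
}

// Toy index: login.ts and session.ts sit near an "authentication" query
// in vector space even though neither matches on the keyword itself.
const index: IndexedFile[] = [
  { path: "login.ts", embedding: [0.9, 0.1, 0.0] },
  { path: "session.ts", embedding: [0.8, 0.2, 0.1] },
  { path: "invoice.ts", embedding: [0.1, 0.1, 0.9] },
];
const authQuery = [0.85, 0.15, 0.05];
console.log(semanticSearch(authQuery, index, 2)); // → [ 'login.ts', 'session.ts' ]
```

A keyword index would return nothing here; the similarity ranking returns the two authentication-related files because their vectors point the same way as the query.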

The index builds in the background without blocking your editor. Initial indexing for a 10,000-file project takes 30 to 60 seconds depending on your hardware. After the initial build, the index updates incrementally — only re-embedding files you modify. The embeddings are stored locally in your project's .cursor directory. Your source code is never uploaded to external servers during indexing. This local-first approach means @codebase works in air-gapped environments and with proprietary codebases.
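Incremental updates can be pictured as hash-based change detection: re-embed a file only when its content hash differs from the one stored in the index. This is a hypothetical sketch of the idea, not Cursor's actual change-detection mechanism; the `embed` function is a stand-in for a real embedding model.

```typescript
// Hypothetical sketch: re-embed only files whose content changed.
import { createHash } from "crypto";

type IndexEntry = { hash: string; embedding: number[] };

// Stand-in for a real embedding model call (toy vector).
function embed(content: string): number[] {
  return [content.length % 7, content.length % 11];
}

function sha256(content: string): string {
  return createHash("sha256").update(content).digest("hex");
}

// Update the index in place; return the paths that were re-embedded.
function updateIndex(
  index: Map<string, IndexEntry>,
  files: Map<string, string>, // path → current file content
): string[] {
  const reembedded: string[] = [];
  for (const [path, content] of files) {
    const hash = sha256(content);
    const entry = index.get(path);
    if (!entry || entry.hash !== hash) {
      index.set(path, { hash, embedding: embed(content) });
      reembedded.push(path);
    }
  }
  // Drop entries for files that no longer exist.
  for (const path of index.keys()) {
    if (!files.has(path)) index.delete(path);
  }
  return reembedded;
}
```

After the initial build, editing a single file means `updateIndex` touches exactly one entry, which is why large projects stay responsive.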

Cursor chat panel showing @codebase semantic search results with file references and relevance scores

Context-Aware Question Answering

Type a question in the chat panel and prefix it with @codebase. Cursor searches the embeddings index for the most relevant code sections, retrieves them, and passes them as context to the AI model along with your question. The model reads your actual code — your variable names, your function signatures, your error handling patterns — and generates an answer grounded in your project rather than generic programming knowledge.
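The retrieve-then-answer step amounts to assembling a prompt from the top-ranked snippets. The sketch below illustrates that assembly; the prompt wording and snippet format are invented for illustration and are not what Cursor sends to the model.

```typescript
// Illustrative sketch: prepend retrieved code snippets as context
// before the user's question (retrieval-augmented generation).

type Snippet = { path: string; text: string };

function buildPrompt(question: string, snippets: Snippet[]): string {
  const context = snippets
    .map((s) => `// ${s.path}\n${s.text}`)
    .join("\n\n");
  return `Answer using only the project code below. Cite file paths.\n\n${context}\n\nQuestion: ${question}`;
}

const prompt = buildPrompt("how does auth work in this project?", [
  { path: "src/auth/login.ts", text: "export function login(user: string) { /* ... */ }" },
]);
```

Because the model sees your real function signatures in the context block, its answer can cite them instead of falling back on generic patterns.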

For example, asking "@codebase how does the payment processing flow work" triggers a semantic search across all files. Cursor might retrieve your checkout controller, Stripe integration module, order model, webhook handler, and retry logic. The AI then explains how these pieces connect — from cart submission through payment capture to order confirmation — citing specific functions and line numbers. Because the answer is grounded in retrieved project code rather than training data alone, it is far more accurate for project-specific questions than a model answering from memory.

Cursor chat showing @file and @symbol references with targeted context for specific code questions

File References, Symbol Lookup, and Folder Scoping

@codebase is the broadest context command — it searches everything. For more targeted questions, Cursor provides three additional reference types. @file lets you specify exact files: "@file:auth.ts @file:middleware.ts explain how these two files handle authentication." @symbol references specific functions or classes: "@symbol:processPayment trace this function's call chain." @folder scopes the search to a directory: "@folder:src/api which endpoints are missing rate limiting?"

You can combine references freely. "@codebase @file:schema.prisma what tables reference the User model?" gives the AI both the full project context and explicit access to your database schema. Each reference type appears as a clickable tag in the chat panel. The AI's response includes file path references — click any reference to jump directly to that file and line in the editor. The Cursor documentation covers all reference types and keyboard shortcuts for quick access.
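The reference syntax above is simple enough to sketch as a parser. This is a hypothetical illustration of how `@codebase`, `@file:`, `@symbol:`, and `@folder:` tokens could be separated from the question text; the real chat panel handles this internally and may tokenize differently.

```typescript
// Hypothetical parser for @-references in a chat query.

type ParsedQuery = {
  codebase: boolean;
  files: string[];
  symbols: string[];
  folders: string[];
  question: string;
};

function parseQuery(input: string): ParsedQuery {
  const result: ParsedQuery = { codebase: false, files: [], symbols: [], folders: [], question: "" };
  const rest: string[] = [];
  for (const token of input.split(/\s+/)) {
    if (token === "@codebase") result.codebase = true;
    else if (token.startsWith("@file:")) result.files.push(token.slice("@file:".length));
    else if (token.startsWith("@symbol:")) result.symbols.push(token.slice("@symbol:".length));
    else if (token.startsWith("@folder:")) result.folders.push(token.slice("@folder:".length));
    else rest.push(token);
  }
  result.question = rest.join(" ");
  return result;
}

const q = parseQuery("@codebase @file:schema.prisma what tables reference the User model?");
console.log(q.files); // → [ 'schema.prisma' ]
```

Each recognized token scopes the search; whatever remains is the natural-language question sent to the model.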

Codebase Chat Features Comparison

How Cursor @codebase compares to other approaches for asking AI questions about code.

| Capability | Cursor @Codebase | ChatGPT / Claude Web | GitHub Copilot Chat |
| --- | --- | --- | --- |
| Project Awareness | Full project via embeddings index | None (paste code manually) | Current file + limited context |
| Semantic Search | Embeddings-based (meaning, not keywords) | None | Limited |
| File References in Answers | Clickable links to file + line | None | Limited |
| @file / @symbol / @folder | Full support | None | @file only |
| Index Location | Local (code stays on machine) | N/A | Cloud-based |
| Incremental Updates | Yes (re-embed changed files only) | N/A | N/A |
| Model Selection | Claude, GPT, Gemini, Auto | Single model per product | OpenAI models |
| Monorepo Support | Thousands of files indexed | Context window limit | Limited context |
| Integrated in Editor | Yes (side panel) | No (separate browser tab) | Yes (side panel) |

What Developers Ask @Codebase

Real question categories that developers use @codebase for daily — from onboarding onto new projects to debugging production issues.

Codebase Onboarding

"How does the authentication system work?" "What is the data flow from API request to database?" "Where are the environment variables configured?" New team members use @codebase to understand unfamiliar projects in hours instead of days. The AI reads the actual code and explains how components connect — architecture diagrams written from your source, not generic patterns. Every answer includes file references so you can verify and explore further.

Debugging and Investigation

"Where is the error 'invalid token format' thrown?" "What functions call the processOrder method?" "Which middleware runs before the /api/users endpoint?" @codebase traces execution paths, finds error origins, and maps call chains across your project. Unlike grep, semantic search understands intent — asking about "payment failures" finds retry logic, error handlers, and webhook processors even if they do not contain the word "payment."

Architecture Review

"Which modules have circular dependencies?" "What database tables are missing indexes for common queries?" "Are there any API endpoints without input validation?" @codebase performs architectural analysis across your entire project. It identifies patterns, anti-patterns, and gaps that would take hours to find through manual code review. Combine with Composer to fix identified issues across multiple files in one operation.

Ask Your Codebase Anything with Cursor

Download Cursor free for macOS, Windows, or Linux. Open the chat panel, type @codebase, and ask your first question. The index builds automatically in the background — no configuration required. The Hobby plan includes limited chat requests. Pro at $20/month provides a credit pool for regular usage with premium models. Import your VS Code settings in one click and start asking questions about your project within minutes.

Download Cursor Explore Composer

Frequently Asked Questions About Cursor @Codebase Chat

Technical details about indexing, semantic search, and chat capabilities.

What is Cursor @codebase chat?

@codebase is Cursor's project-wide AI chat. It indexes every file using embeddings for semantic understanding. Ask questions in the chat panel with @codebase and get answers that reference your actual functions, types, and configurations. No manual copy-pasting or file selection needed.

How does @codebase indexing work?

Cursor generates embeddings — vector representations of your code's semantic meaning — for every file. The index builds locally in 30-60 seconds for large projects and updates incrementally as you edit. Your code never leaves your machine during indexing. Embeddings are stored in your project's .cursor directory.

Can I reference specific files in Cursor chat?

Yes. Use @file for specific files, @symbol for functions or classes, and @folder for directories. Combine them freely: "@codebase @file:schema.prisma what references the User model?" Each reference appears as a clickable tag. Responses include file path links to navigate directly to the source.

How does Cursor codebase chat compare to ChatGPT for code questions?

ChatGPT answers generic programming questions without knowing your project. Cursor @codebase reads your actual code via embeddings and answers with specific file references. Ask "how does auth work in this project" and Cursor finds your auth middleware, token logic, and route guards. ChatGPT gives generic authentication patterns.

Is codebase chat available on the free Hobby plan?

Yes. Hobby includes indexing and limited chat requests. Pro ($20/mo) adds a credit pool for frequent usage. Auto mode is unlimited on all paid plans. The index builds locally on all plans. See pricing for full tier details.