Cursor Security — How Your Code Stays Private and Protected

Security is not a feature you enable in Cursor — it is the default state. Privacy Mode ensures your source code is never stored on Cursor servers. SOC 2 Type II compliance means an independent auditor has verified the controls. Encryption protects data in transit and at rest. Local indexing keeps your codebase on your machine. And team admin controls let organizations enforce policies across every developer seat.

Whether you are an individual developer working on a side project or an enterprise engineering team handling regulated data, Cursor provides the security architecture to meet your requirements without compromising the AI-powered editing experience.

Cursor Security Architecture — April 2026

  • Privacy Mode: code snippets processed in memory and immediately discarded — never stored, logged, or used for training
  • SOC 2 Type II certified with independent audit verification of security controls
  • TLS 1.3 encryption for all data in transit; AES-256 encryption for data at rest on Cursor infrastructure
  • Local codebase indexing — the @codebase semantic index never leaves the developer's machine
  • Team admin controls: enforce Privacy Mode, restrict models, require SSO, set session timeouts, audit AI usage
  • Zero training policy: customer code is never used to train or fine-tune any AI model

Privacy Mode — Your Code Never Leaves Your Control

The foundation of Cursor's security model is Privacy Mode, a zero-retention architecture for code processing.

How Privacy Mode Works

When Privacy Mode is enabled, every code snippet sent to Cursor's servers for AI processing is handled in memory only. The snippet is forwarded to the AI model provider (Anthropic, OpenAI, or Google), the response is returned to the editor, and both the request and response are immediately discarded from Cursor's infrastructure. No logs. No cache. No database records. The code exists on Cursor's servers only for the duration of the API call — typically under two seconds.
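
The zero-retention flow described above can be sketched as a simple in-memory relay. This is an illustrative model, not Cursor's actual implementation: the request is forwarded, the response is returned, and nothing is written to a log, cache, or database.

```python
# Illustrative sketch of a zero-retention relay (not Cursor's real code):
# the snippet exists only in local variables for the duration of the call.

def relay(snippet: str, model_call) -> str:
    """Forward a code snippet to a model provider and return the response.

    No logging, no caching, no persistence: once this function returns,
    no copy of the snippet or the response remains in the relay process.
    """
    response = model_call(snippet)  # in-memory round trip to the provider
    return response                 # nothing written to disk anywhere


# Usage with a stand-in provider function:
fake_provider = lambda s: f"completion for {len(s)} chars"
print(relay("def add(a, b): return a + b", fake_provider))
```

The key design property is the absence of side effects: there is simply no code path that persists the payload.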

Privacy Mode is available on every plan, including the free Hobby tier. It is enabled by default for Teams and Enterprise accounts. Individual users can toggle it in Settings > Privacy. Once enabled, it applies to all AI features: Tab completions, Composer, agent mode, and @codebase chat queries.

Data Processing Agreements

Cursor maintains explicit data processing agreements with all AI model providers. These agreements prohibit Anthropic, OpenAI, and Google from using any code or prompts sent through Cursor for model training, fine-tuning, or any purpose other than generating the immediate response. This is a contractual obligation, not just a policy — it is enforceable and audited as part of Cursor's SOC 2 compliance. Enterprise customers can request copies of these DPAs for their legal review.

The zero-training guarantee applies regardless of whether Privacy Mode is enabled. Privacy Mode adds the additional layer of zero retention on Cursor's own infrastructure. Together, these protections mean your code is processed once, returned, and forgotten — by both Cursor and its AI providers.

Security Features by Plan

Every Cursor plan includes baseline security. Teams and Enterprise tiers add organizational controls.

| Security Feature | Hobby (Free) | Pro ($20/mo) | Pro+ ($60/mo) | Teams ($40/user) | Enterprise |
| --- | --- | --- | --- | --- | --- |
| Privacy Mode | Available | Available | Available | Enforced by default | Enforced by default |
| TLS 1.3 in Transit | Yes | Yes | Yes | Yes | Yes |
| AES-256 at Rest | Yes | Yes | Yes | Yes | Yes |
| Local Codebase Index | Yes | Yes | Yes | Yes | Yes |
| Zero Training Policy | Yes | Yes | Yes | Yes | Yes |
| SOC 2 Type II | Covered | Covered | Covered | Covered | Covered |
| SSO / SAML | No | No | No | Yes | Yes |
| Admin Policy Enforcement | No | No | No | Yes | Yes |
| AI Usage Audit Logs | No | No | No | Yes | Yes |
| Model Restriction Controls | No | No | No | Yes | Yes |
| Dedicated Infrastructure | No | No | No | No | Yes |
| Custom DPA | No | No | No | On request | Yes |

Encryption, Compliance, and Infrastructure

Technical details on how Cursor protects data at every layer of the stack.

Encryption Standards

All communication between the Cursor editor and backend servers uses TLS 1.3 with forward secrecy. Data at rest on Cursor infrastructure is encrypted with AES-256. API keys and authentication tokens stored on the developer's machine use the operating system keychain (Keychain Access on macOS, Credential Manager on Windows, libsecret on Linux) — never plain text configuration files.
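
As a concrete reference point for the transport guarantee, this is how a client pins itself to TLS 1.3 using Python's standard `ssl` module (requires an OpenSSL build with TLS 1.3 support); it is a generic example of the standard, not Cursor's networking code.

```python
import ssl

# Build a client-side context that refuses anything below TLS 1.3
# and always verifies the server certificate and hostname.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
ctx.check_hostname = True
ctx.verify_mode = ssl.CERT_REQUIRED
```

A connection opened with this context either negotiates TLS 1.3 with a verified certificate or fails outright; there is no silent downgrade path.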

SOC 2 Type II Compliance

Cursor's SOC 2 Type II report covers security, availability, and confidentiality trust service criteria. The audit verifies that access controls, change management, incident response, and data handling procedures operate effectively over a sustained period. The report is available to enterprise customers and prospects under NDA. Cursor undergoes annual re-certification to maintain compliance as the product and infrastructure evolve.

Local-First Architecture

The @codebase semantic index is built and stored locally on the developer's machine. Project files are scanned, embedded, and indexed without uploading to any server. Queries against the index execute locally in milliseconds. Only the specific code context needed for an AI request is sent to the server — and only when the developer initiates a completion, Composer edit, or chat query. Idle code stays on disk, untouched. This follows the data-minimization principle in NIST security guidance: transmit only the data required for the task at hand.
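
The retrieval pattern can be illustrated with a deliberately tiny toy: rank files locally and select only the best match as candidate context. The bag-of-words "embedding" below is a stand-in for a real semantic index; the file names and scoring are hypothetical, and none of this is Cursor's actual indexing code.

```python
import math
from collections import Counter

# Toy local-first retrieval: everything below runs on-device. Only the
# single best-matching snippet would ever accompany an AI request.

def embed(text: str) -> Counter:
    """Stand-in 'embedding': a bag-of-words token count."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_context(files: dict, query: str) -> str:
    """Rank all files locally; return only the top match for upload."""
    q = embed(query)
    return max(files, key=lambda name: cosine(embed(files[name]), q))

# Hypothetical project files:
files = {
    "auth.py": "def login(user, password): verify credentials",
    "billing.py": "def charge(card, amount): process payment",
}
print(best_context(files, "how do we verify credentials"))  # → auth.py
```

The privacy-relevant point is the shape of the data flow: ranking happens over the whole project locally, and only the winning snippet is a candidate for transmission.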

Team and Enterprise Security Controls

Organizational tools for enforcing security policies across engineering teams.

Admin Dashboard

Teams and Enterprise administrators access a centralized dashboard for managing security policies. From the dashboard, admins can enforce Privacy Mode for all team members (preventing individual developers from disabling it), restrict which AI models are available to the team, configure SSO with SAML 2.0 providers, set session timeout durations, and review AI usage logs that show which features each developer uses and how frequently.
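
Conceptually, these controls amount to a policy document evaluated against every AI request. The sketch below uses entirely hypothetical field names and model identifiers (not Cursor's admin schema) to show the shape of such an enforcement check.

```python
# Hypothetical team policy document -- keys and model names are
# illustrative only, not Cursor's actual configuration format.
policy = {
    "privacy_mode": "enforced",
    "allowed_models": ["claude-sonnet", "gpt-4o"],
    "sso_required": True,
    "session_timeout_minutes": 60,
}

def is_request_allowed(policy: dict, model: str, sso_session: bool) -> bool:
    """Apply the team policy to a single AI request."""
    if policy["sso_required"] and not sso_session:
        return False                      # no valid SSO session: reject
    return model in policy["allowed_models"]  # reject restricted models
```

Because the policy lives with the organization rather than the individual editor, an admin change takes effect for every seat at once.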

These controls are designed for organizations operating under compliance frameworks like SOC 2, HIPAA, or GDPR where data handling policies must be enforced uniformly rather than left to individual discretion. The admin dashboard is available on the Teams plan ($40/user/month) and the Enterprise tier with custom pricing.

Audit and Monitoring

AI usage audit logs record every AI interaction across the team — completions, Composer edits, agent mode sessions, and chat queries. Logs include timestamps, the feature used, the model selected, and the developer's identity. They do not include the actual code content, preserving developer privacy while giving security teams the visibility they need for compliance reporting.

Enterprise customers can integrate audit logs with their existing SIEM (Security Information and Event Management) systems via webhook or API export. This enables automated alerting — for example, flagging unusual spikes in agent mode usage or attempts to use restricted models. The Cursor documentation covers audit log configuration and integration patterns.
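
A spike-detection rule of the kind mentioned above is straightforward to express over exported events. The event fields below are a hypothetical shape for illustration, not Cursor's actual audit-log export schema; a real SIEM rule would consume whatever schema the export provides.

```python
from collections import Counter

def flag_spikes(events, feature="agent", threshold=5):
    """Return (developer, hour) pairs whose per-hour usage of a feature
    exceeds a fixed threshold -- the kind of rule a SIEM alert encodes."""
    counts = Counter(
        (e["dev"], e["hour"]) for e in events if e["feature"] == feature
    )
    return [key for key, n in counts.items() if n > threshold]

# Hypothetical exported events (illustrative field names):
events = (
    [{"dev": "alice", "feature": "agent", "hour": 9}] * 6
    + [{"dev": "bob", "feature": "chat", "hour": 9}] * 2
)
print(flag_spikes(events))  # → [('alice', 9)]
```

Note that the rule needs only metadata (identity, feature, timestamp), which matches the audit logs' design of excluding code content.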

Frequently Asked Questions About Cursor Security

Direct answers about Privacy Mode, compliance, encryption, and data handling in Cursor.

Does Cursor store my source code on its servers?

No. With Privacy Mode enabled, code snippets are processed in memory and immediately discarded. They are never stored, logged, cached, or used for model training. Privacy Mode is available on all plans including the free Hobby tier and is enforced by default on Teams and Enterprise accounts.

Is Cursor SOC 2 compliant?

Yes. Cursor has achieved SOC 2 Type II compliance covering security, availability, and confidentiality. An independent auditor verifies that controls operate effectively over a sustained period. The report is available to enterprise customers under NDA. Annual re-certification ensures ongoing compliance.

How does Cursor encrypt data?

TLS 1.3 with forward secrecy for all data in transit. AES-256 for data at rest on Cursor infrastructure. Local credentials stored in the OS keychain (Keychain Access, Credential Manager, or libsecret). The local codebase index is protected by the operating system's disk encryption if enabled.

Can team admins enforce security policies?

Yes. On Teams ($40/user/month) and Enterprise plans, admins can enforce Privacy Mode, restrict available models, require SSO, set session timeouts, and review AI usage audit logs. These controls are managed from a centralized admin dashboard.

Does Cursor use my code to train AI models?

No. Cursor does not use customer code for training or fine-tuning. Data processing agreements with Anthropic, OpenAI, and Google contractually prohibit using code sent through Cursor for any purpose other than generating the immediate response. This applies to all plans regardless of Privacy Mode status.