Compliance & auditability
This page defines how AI systems are made auditable, explainable, and policy-aligned. Cursor agents should treat these principles as constraints when generating or modifying AI behavior.
Deterministic memory architecture
Memory is not a black box. It is a structured set of stores, each with clear retention, indexing, and access policies.
- Session state: short-lived context for workflows, scoped to a specific process.
- Long-term memory: curated knowledge artifacts with explicit governance.
- Evidence store: immutable records of model inputs, outputs, and decisions.
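The three stores above can be sketched as distinct data shapes. This is an illustrative sketch only; the class and field names are assumptions, not a real API. The key property to notice is that the evidence store is immutable by construction.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class SessionState:
    """Short-lived context for a workflow, scoped to one process."""
    process_id: str
    context: Dict[str, Any] = field(default_factory=dict)

@dataclass
class LongTermEntry:
    """Curated knowledge artifact with explicit governance metadata."""
    content: str
    owner: str           # who governs this artifact
    retention_days: int  # explicit, versioned retention

@dataclass(frozen=True)  # frozen: records cannot be mutated after creation
class EvidenceRecord:
    """Immutable record of a model input, output, and decision."""
    model_input: str
    model_output: str
    decision: str
```

Modeling the evidence store as a frozen dataclass makes "immutable records" a property the runtime enforces, rather than a convention reviewers must check by hand.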
Memory constraints
- All reads and writes are attributable to a system, user, or policy decision.
- Access patterns can be reconstructed from logs for compliance review.
- Retention policies are explicit, versioned, and aligned with regulatory requirements.
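The first two constraints can be enforced at the store boundary: every read and write must name an actor, and the resulting access pattern is reconstructible from an append-only log. A minimal sketch, assuming a key-value interface (the class name and actor-ID format are hypothetical):

```python
class AttributedStore:
    """Key-value store where every access is attributable and logged."""

    def __init__(self):
        self._data = {}
        self.access_log = []  # append-only; access patterns reconstructed from here

    def write(self, key, value, actor):
        # Reject unattributable writes up front (constraint 1).
        if not actor:
            raise ValueError("writes must be attributable to a system, user, or policy")
        self._data[key] = value
        self.access_log.append(("write", key, actor))

    def read(self, key, actor):
        if not actor:
            raise ValueError("reads must be attributable to a system, user, or policy")
        self.access_log.append(("read", key, actor))
        return self._data[key]
```

A compliance review then replays `access_log` to answer "who touched what, and why" without inspecting store internals.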
Audit trails
Every material AI decision must be reconstructible after the fact. This section describes what must be logged and how it is structured.
- Event logs: timestamped records of prompts, context, model choices, and outputs.
- Policy logs: which policies were evaluated, their results, and any overrides.
- Operator logs: human interventions, approvals, and corrections.
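One way to keep the three log types reconstructible is to give them a shared entry shape with a `kind` discriminator, serialized as append-only JSON lines. The field names below are assumptions for illustration, not the actual schema:

```python
import json
import time
from dataclasses import dataclass, asdict
from typing import Any, Dict

@dataclass(frozen=True)
class LogEntry:
    timestamp: float      # when the event occurred
    correlation_id: str   # ties related entries together
    environment: str      # e.g. "prod", "staging"
    kind: str             # "event" | "policy" | "operator"
    payload: Dict[str, Any]

def to_line(entry: LogEntry) -> str:
    """Serialize one entry as a JSON line for an append-only log."""
    return json.dumps(asdict(entry))

entry = LogEntry(
    timestamp=time.time(),
    correlation_id="req-123",
    environment="prod",
    kind="policy",
    payload={"policy": "pii-redaction-v3", "result": "flag", "override": None},
)
```

A prompt/output record, a policy evaluation, and an operator approval all become entries of different `kind` sharing one correlation ID, so a single decision can be replayed end to end.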
Log structure
- Each log entry references a correlation ID and environment.
- Logs are queryable by user, system, policy, and time range.
- Streaming Compliance (see product page) builds on this log schema.
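Queryability by user, system, policy, and time range reduces to filtering over the shared entry schema. A minimal sketch over entries represented as dictionaries (the field names are illustrative assumptions):

```python
def query_logs(entries, *, correlation_id=None, kind=None,
               start=None, end=None):
    """Filter log entries by correlation ID, kind, and time range.

    Each filter is optional; omitted filters match everything.
    """
    results = []
    for e in entries:
        if correlation_id is not None and e["correlation_id"] != correlation_id:
            continue
        if kind is not None and e["kind"] != kind:
            continue
        if start is not None and e["timestamp"] < start:
            continue
        if end is not None and e["timestamp"] >= end:
            continue
        results.append(e)
    return results
```

In practice this would be backed by an indexed store rather than a linear scan, but the query surface (dimensions and time range) is the same.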
Policy automation
Compliance policies are represented as executable rules evaluated in near real-time against AI events. They are versioned, testable, and explainable.
- Rule definitions: declarative policies referencing event fields and contextual metadata.
- Evaluation engine: deterministic runtime that evaluates rules and emits decisions.
- Controls: block, allow, flag, or escalate actions with clear downstream behavior.
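The rule/engine/control split above can be sketched with declarative rules as plain data and a deterministic evaluator that emits one decision per rule. The rule fields and IDs here are hypothetical examples, not real policies:

```python
# Declarative rules: each references an event field and names a control action.
RULES = [
    {"id": "no-pii-v1",      "field": "contains_pii",  "equals": True, "action": "block"},
    {"id": "long-output-v2", "field": "output_tokens", "gt": 4000,     "action": "flag"},
]

def evaluate(event, rules=RULES):
    """Deterministically evaluate every rule against one event.

    Returns one decision per rule, so the policy log records what was
    evaluated and with what result, not just what fired.
    """
    decisions = []
    for rule in rules:
        value = event.get(rule["field"])
        matched = (
            ("equals" in rule and value == rule["equals"])
            or ("gt" in rule and value is not None and value > rule["gt"])
        )
        decisions.append({
            "rule": rule["id"],
            "matched": matched,
            "action": rule["action"] if matched else "allow",
        })
    return decisions
```

Because the engine emits a decision for every rule, not only the ones that fire, the resulting policy log is explainable: a reviewer can see which policies were evaluated, their results, and where an override would apply.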
The Streaming Compliance product page describes how these concepts assemble into an end-to-end system for continuous compliance monitoring.