Ritual AI Security & Privacy Practices
At Ritual, we aim to be transparent about how AI features work, how customer data is handled, and what security and privacy controls apply across different deployment models.
Deployment Context
Ritual supports multiple deployment models, and the exact AI architecture depends on the model your organization selects:
- Shared SaaS — Ritual hosts and operates the platform in a managed multi-tenant environment.
- Private Cloud — Ritual is deployed into dedicated customer-controlled infrastructure, typically within your cloud environment or VPC.
- Air-Gapped — The full Ritual stack, including AI components, runs inside your controlled environment with no required external network egress.
Unless otherwise noted, the practices described below reflect Ritual’s default managed deployment posture. Certain infrastructure, provider, retention, and model-execution details may differ for customer-hosted or air-gapped deployments.
What is Ritual AI?
Ritual AI is a collection of AI-powered products and features, which may include:
- AI Assistant — Helps generate written content, answer questions, and support brainstorming by leveraging information from across your Ritual workspace and, where enabled, external sources such as the web.
- Document Generation — Generates text across multiple records or workflows simultaneously based on pre-configured prompts and business context.
- Custom Domain Models — Supports organization-specific model behavior, including customer-authorized fine-tuning or adaptation for customers that require a more tailored AI experience.
Ritual AI features appear natively within the product experience, but rely on different model and infrastructure configurations depending on the deployment model and customer requirements.
Data Security & Privacy
- No Shared-Model Training by Default — By default, Ritual does not use customer content or personal data to train or improve shared foundation models, third-party AI services serving other customers, or Ritual capabilities operated across customers.
- Optional Customer-Specific Model Adaptation — For organizations that want a more tailored model experience, Ritual can support an explicit, customer-authorized path to fine-tune or adapt a dedicated model using that organization’s own data for that organization’s sole benefit. Any such configuration is isolated to the applicable customer deployment boundary and governed by the relevant security, data handling, and commercial terms.
- Secure Transmission — When data is transmitted between Ritual services and applicable AI providers or model endpoints, it is encrypted in transit using TLS.
- Scoped Retention & Processing Controls — Ritual is designed to use enterprise-grade AI processing controls, including zero-retention or equivalent safeguards where available and appropriate. Exact retention and processing behavior depends on the selected deployment model, configured providers, and applicable contractual terms.
- Aggregated Product Improvement Data — Ritual may use aggregated and de-identified usage information, such as feature usage patterns, reliability metrics, and performance signals, to improve the platform. This does not include using identifiable customer content to train shared AI models.
- Permission-Aware Data Access — Ritual is designed so that AI features only process data the requesting user is already authorized to access within the applicable workspace or deployment boundary.
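As an illustration of the permission-aware access principle, the sketch below shows one common way such a check can be structured: candidate documents are filtered against the requesting user's existing permissions before any content reaches a model. This is a minimal, hypothetical example; the class and function names are illustrative and do not describe Ritual's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Document:
    doc_id: str
    content: str
    allowed_roles: frozenset[str]  # roles already permitted to read this document


def filter_for_user(docs: list[Document], user_roles: set[str]) -> list[Document]:
    """Keep only documents the requesting user is authorized to read.

    The permission check runs *before* any content reaches a model, so the
    AI feature cannot surface data outside the user's existing access.
    """
    return [d for d in docs if d.allowed_roles & user_roles]


corpus = [
    Document("d1", "Quarterly roadmap", frozenset({"pm", "exec"})),
    Document("d2", "Payroll summary", frozenset({"hr"})),
]

# A user with only the "pm" role sees d1; d2 never enters AI processing.
visible = filter_for_user(corpus, user_roles={"pm"})
print([d.doc_id for d in visible])
```

The key design point is that authorization is enforced at retrieval time, not delegated to the model, so access control does not depend on model behavior.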
Auditability & Output Quality
Ritual is designed to improve answer quality and auditability by grounding model outputs in enterprise context, including source documents, structured metadata, and platform logic.
Rather than treating model output as unconstrained generation, Ritual uses retrieval and orchestration patterns that help tie responses back to underlying source material and business context. This supports reviewability, traceability, and stronger human oversight for high-stakes enterprise use cases.
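The retrieval-and-grounding pattern described above can be sketched as a prompt-assembly step in which each retrieved passage carries a stable identifier the model is asked to cite. This is a simplified, hypothetical sketch of the general technique, not Ritual's actual orchestration code.

```python
def build_grounded_prompt(question: str, sources: list[dict]) -> str:
    """Assemble a prompt that ties the model's answer back to cited sources.

    Each retrieved passage is labeled with an identifier so the response
    can cite, and a reviewer can later trace, the underlying material.
    """
    context = "\n".join(f"[{s['id']}] {s['text']}" for s in sources)
    return (
        "Answer using only the sources below, citing source IDs in brackets.\n"
        "If the sources do not contain the answer, say so.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


prompt = build_grounded_prompt(
    "What is the renewal date?",
    [{"id": "DOC-7", "text": "The contract renews on June 1."}],
)
print(prompt)
```

Because every passage is labeled, a reviewer can check each cited claim against the identified source rather than trusting unconstrained generation.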
Ritual is designed to reduce hallucination risk and improve explainability; it does not eliminate model error entirely.
Ritual AI Infrastructure
What is a Subprocessor?
A Subprocessor is a third party engaged by Ritual Mobile, Inc. (and its applicable affiliates) to process customer personal data in connection with providing the Services.
List of Third-Party Subprocessors
The subprocessors listed below reflect Ritual’s default managed-service posture. Depending on deployment model, customer-specific architecture, or contractual requirements, some of these subprocessors may not apply, and additional customer-controlled infrastructure or providers may be used instead.
Infrastructure
We use these subprocessors for hosting and running our Services in applicable managed deployments.
| Subprocessor | Purpose of Processing | Location |
|---|---|---|
| AWS | Hosting and storage systems | USA |
| Cloudflare | Web content delivery and edge security services | USA |
Platform
We use these subprocessors to help operate and deliver Ritual AI in applicable configurations.
| Subprocessor | Purpose of Processing | Location |
|---|---|---|
| AWS – Bedrock | Securely hosting and running generative AI models within the Ritual platform in supported configurations | – |
| OpenAI | Service provider for large language models and embeddings in supported configurations | USA |
| Sentry | Application logging and error monitoring | USA |
| SendGrid | Email delivery | USA |
Customer and Support Services
We use these subprocessors to provide customer communications and support operations.
| Subprocessor | Purpose of Processing | Location |
|---|---|---|
| Intercom | Support services | USA |
| Zendesk | Support services | USA |
Business Operations
We use these subprocessors to support internal operations and service analytics.
| Subprocessor | Purpose of Processing | Location |
|---|---|---|
| PostHog | Event logging and analytics | USA |
Last updated: March 9, 2026