Your project models contain sensitive financial assumptions, proprietary engineering data, and investment-grade analyses. Aire Labs is built with the controls and practices required to keep that data protected.
Every organization in Aire Labs is fully isolated. Your data—models, terms, scenarios, assumptions, and outputs—is never shared with or visible to other organizations, and access is enforced at the platform level, not just by convention. Within your organization, access is controlled by user roles: only users you’ve authorized can view or edit your projects.
Data is encrypted in transit and at rest. All communication between your browser and the Aire Labs platform is protected with industry-standard TLS, and stored data is encrypted at rest.
Models you upload during ingestion are handled securely and used solely for the purpose of structuring your project on the platform. Your original files are never shared with other organizations or used for any purpose outside your project.
All access to Aire Labs is authenticated. Users must be explicitly granted access to your organization—there is no open or anonymous access to your projects or data.
You can control access so that specific team members only see the data relevant to them. Aire Labs supports granular permissions per user, letting you limit visibility and editing rights across projects within your organization.
Aire Labs uses enterprise APIs from Google, Anthropic, and OpenAI to power Project AI. These integrations operate under enterprise terms that prohibit training on customer data—the same data protection model used by services like Microsoft 365—and contractual data protection is in place with each provider. Your model data is used to answer your questions within a session and is never used to train or improve any external AI models.
Aire Labs is pursuing SOC 2 certification. For security documentation, compliance questionnaires, or specific requirements related to your organization’s procurement process, contact your Aire Labs project lead.