Box AI Trust

Enabling the responsible and secure use of enterprise-grade AI


With the adoption of AI, enterprises face unique security, privacy, and compliance challenges that must be carefully addressed as regulations continue to evolve. Learn how we at Box support the responsible and secure use of AI, including Box AI.

Box AI Principles

The Box AI Principles outline how we protect our customers’ interests and their content. They are our framework for the responsible use of AI within Box, promoting transparency around how Box will use AI and highlighting the need for responsible customer use. By adhering to these guidelines, we can all capitalize on the benefits of AI without compromising the integrity of proprietary data or operations.

Our commitment to promoting transparency in AI

We are committed to being transparent about our AI practices, technology, vendors, and data usage. When you choose to enable Box AI, it’s important to us that you understand how Box AI is powered by integrations with large language models (LLMs) supplied by our trusted AI model providers. We continually monitor our integrations with trusted AI service partners to ensure that outputs generated by Box AI remain valid and reliable. 

AI security and visibility

Stay confident with AI capabilities that mitigate security threats, plus robust encryption, access controls, and advanced security measures (all extended to Box AI). Maintain transparency across AI-enabled products and users, and rest assured that your enterprise data is never used by AI model providers to train their models without your prior approval. Leverage the latest AI solutions, like secure RAG, while maintaining peace of mind with our commitment to protecting your sensitive data.

AI privacy

Securing our customers’ content and protecting their privacy rights are at the heart of the Intelligent Content Cloud. That’s why when we created Box AI, we took a privacy-first approach. We apply privacy requirements to our product designs, policies, and third-party service provider assessments, and we’re proud to maintain many of the highest standards and certifications in privacy and data protection.

AI compliance

Box prides itself on having a robust compliance program that maintains various certifications across industries and geographies, as reflected on our Trust Center. We work with government agencies, industry associations, and our auditors to incorporate new requirements and controls around AI as they become available. We will continue to drive the same frictionless compliance for Box AI that customers have come to expect across all our solutions.

AI governance

Our AI Governance Program regularly reviews legal, regulatory, technical, compliance, and security considerations as well as requirements specific to industry, sector, and business purposes. Our program incorporates standards such as the National Institute of Standards and Technology AI Risk Management Framework (NIST AI RMF) and the Organization for Economic Co-operation and Development (OECD) AI Principles.

Box AI trusted models

Frequently Asked Questions

What is Box’s approach to incorporating AI into its product offering?
What AI models does Box integrate with?
Does Box permit AI models to train on customer content?
Who has or will have access to the Box AI prompts and resulting outputs?
How has Box implemented responsible AI?
Does Box have an AI governance program?

Resources

Box AI governance
AI governance: Safeguarding trust and data privacy (blog)
Protecting privacy in an AI-driven world (video)
The evolving landscape of AI regulations (video)
Ensuring data privacy and trust across AI innovations (video)

Ready to get started?