How can we get our data ready before rolling out AI like Copilot?
Before you deploy AI, focus on data hygiene so AI doesn’t amplify existing data risks. The white paper outlines four key steps:
1. **Know your data**
Many organizations lack visibility into their sensitive information—30% of decision-makers say they don’t know where all their business‑critical data is. Start by:
- Using tools like **Microsoft Purview Content Explorer and Activity Explorer** to locate sensitive data and see how it’s being used.
- Classifying and labeling sensitive data with **built‑in or custom sensitive information types (SITs)**.
- Enabling users to apply **sensitivity labels directly in Microsoft 365 apps** as they work.
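Conceptually, a sensitive information type pairs a pattern with a match rule. A minimal sketch in Python, where the SIT names and regexes are illustrative assumptions, not actual Purview SIT definitions (real SITs also use keywords, checksums, and confidence levels):

```python
import re

# Hypothetical custom sensitive information types (SITs): name -> pattern.
CUSTOM_SITS = {
    "Credit Card Number": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "Employee ID": re.compile(r"\bEMP-\d{6}\b"),
}

def classify(text: str) -> list[str]:
    """Return the name of every SIT whose pattern matches the text."""
    return [name for name, pattern in CUSTOM_SITS.items() if pattern.search(text)]

hits = classify("Invoice paid with card 4111 1111 1111 1111 by EMP-004213.")
```

In Purview, classifications like these drive labeling and policy; the sketch only shows the pattern-matching idea behind a custom SIT.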
2. **Govern your data**
Non‑compliant AI usage can lead to regulatory issues and fines. Before AI deployment:
- Review and clean up **SharePoint sites and permissions**; identify sites and files with overly open access and remediate them.
- Apply **SharePoint‑wide content management policies** and delete old or obsolete data.
- Use **Microsoft Purview machine learning classifiers** to detect and mitigate risks.
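The permissions review above amounts to flagging any site whose access list includes broad, catch-all groups. A hypothetical sketch (the group names and site data are invented for illustration):

```python
# Groups that grant overly open access; an illustrative list, not official.
BROAD_GROUPS = {"Everyone", "Everyone except external users", "All Users"}

def find_overshared(sites: dict[str, set[str]]) -> list[str]:
    """Return the names of sites whose permissions include a broad group."""
    return sorted(name for name, perms in sites.items() if perms & BROAD_GROUPS)

sites = {
    "HR-Reviews": {"HR Team", "Everyone"},
    "Finance-Q3": {"Finance Team"},
    "Eng-Wiki": {"All Users", "Engineering"},
}
flagged = find_overshared(sites)  # candidates to remediate before AI rollout
```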
3. **Protect your data**
Copilot respects existing labels and permissions. Any output generated will inherit the **highest sensitivity level** of the referenced files the user is allowed to see. To take advantage of this:
- Use **Microsoft Purview Information Protection** to classify, label, and protect data based on sensitivity.
- Configure labels to apply protections such as **encryption, rights management, and watermarks**, and to define who can access what.
- Use a **unified labeling solution** across Microsoft apps, services, security tools, and devices so protection is consistent.
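The inheritance rule stated above (output takes the highest sensitivity of the referenced files) reduces to a maximum over an ordered label taxonomy. A sketch where the label names and their ordering are assumptions for illustration:

```python
# Assumed label taxonomy, ordered from least to most sensitive.
LABEL_ORDER = ["Public", "General", "Confidential", "Highly Confidential"]
RANK = {label: i for i, label in enumerate(LABEL_ORDER)}

def inherited_label(referenced_labels: list[str]) -> str:
    """Output inherits the highest-sensitivity label among referenced files."""
    return max(referenced_labels, key=RANK.__getitem__)

label = inherited_label(["General", "Confidential", "General"])
```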
4. **Prevent data loss**
You want to avoid both losing business‑critical data and having users send sensitive data into AI prompts:
- Set up **Microsoft Purview Data Loss Prevention (DLP)** policies to prevent data exfiltration via cloud uploads, USB, external sharing, and more.
- Extend DLP and labeling to **Windows 10 devices, Chrome, on‑premises file shares, SharePoint libraries, and Teams chats and channels**.
- Use DLP to specifically control how **AI‑generated content and sensitive prompts** can be shared.
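Conceptually, a DLP check on AI prompts scans the content before it leaves the app and blocks on a match. A hypothetical sketch (the pattern and the block/allow actions are illustrative, not Purview's actual policy engine):

```python
import re

# Illustrative DLP rule: block prompts containing a US SSN-like pattern.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def dlp_check(prompt: str) -> str:
    """Return 'block' if the prompt contains sensitive data, else 'allow'."""
    return "block" if SSN.search(prompt) else "allow"

decision = dlp_check("Summarize the case for SSN 123-45-6789")
```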
By following these steps before you turn on AI, you reduce the risk of data oversharing, leakage, and non‑compliant usage once tools like Copilot are in everyday use.
Why should we consider Copilot for Microsoft 365 instead of other AI tools?
Many employees are already using AI at work—**75% of knowledge workers** use AI, and about **78% of AI users are bringing their own tools**. That creates shadow AI and security gaps, especially when organizations lack visibility into where sensitive data is going.
Copilot for Microsoft 365 is positioned as a more secure and governed alternative because:
1. **It builds on your existing Microsoft 365 security and compliance**
- Copilot runs on top of your current **Microsoft 365 environment**, using the same identity, access controls, and compliance tools you already rely on.
- It only accesses content that the **user is already authorized to see**, so existing permissions remain the primary control.
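The access model in point 1 amounts to filtering candidate content by the requesting user's existing permissions before anything reaches the model. A minimal sketch with invented documents and access control lists:

```python
# Illustrative documents, each with an access control list (ACL) of groups.
DOCS = {
    "budget.xlsx": {"acl": {"finance"}, "text": "FY25 budget"},
    "handbook.docx": {"acl": {"finance", "hr", "eng"}, "text": "Policies"},
}

def accessible_docs(user_groups: set[str]) -> list[str]:
    """Only content the user is already authorized to see is retrievable."""
    return sorted(name for name, doc in DOCS.items() if doc["acl"] & user_groups)

visible = accessible_docs({"eng"})  # an engineer sees only the handbook
```

Because the filter runs on existing permissions, cleaning up overshared sites (step 2 of the preparation guidance) directly narrows what AI can surface.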
2. **You keep control of your data**
- Your data is **encrypted** and is **not used to train the foundation large language models (LLMs)** behind Copilot.
- You maintain control over **data location and residency**, including options like the **EU data boundary** for storing and processing.
- For **web‑grounded prompts**, the web data used is covered by **commercial data protection**.
3. **It supports a shared responsibility model for AI**
The white paper highlights a shared responsibility model across three layers:
- **AI platform** (provided by Microsoft).
- **AI application** (Copilot itself).
- **AI usage** (how your users interact with it).
Microsoft provides secure platform and application capabilities, while you focus on data classification, access controls, and user governance.
4. **You can choose a deployment path that matches your risk posture**
- Before rollout, you complete an **optimization assessment** that evaluates your licensing, data security posture, and readiness.
- Based on that, Microsoft recommends either a **Core** path or a **Best‑in‑class** path, allowing you to align Copilot deployment with your security and compliance expectations.
5. **You can layer additional protections on top**
- Configure **sensitivity label policies** so Copilot outputs inherit labels and protections from source content.
- Use **Microsoft Purview** capabilities to discover AI risks, protect sensitive data, and govern usage.
Because Copilot is integrated into Microsoft 365 and aligned with your existing controls, it lets you adopt AI in a way that balances productivity gains with data security and compliance, rather than relying on unmanaged, unsanctioned AI tools.
How do we secure and govern Copilot usage once it’s deployed?
Once Copilot is live, the focus shifts from preparation to ongoing visibility, protection, and compliance. The white paper recommends a three‑step approach:
1. **Discover data risks with Microsoft Purview AI Hub**
Many security teams struggle with visibility; **30% of decision‑makers** say they don’t know where or what their sensitive business‑critical data is. AI Hub helps close that gap by:
- Showing **how AI apps (including Copilot and third‑party tools) are being used** across your organization.
- Providing **ready‑to‑use data protection policies** tailored to AI scenarios.
- Surfacing **total AI interactions and associated risk levels**.
- Highlighting **sensitive data shared with Copilot**, **unlabeled files** referenced in prompts, and **overshared SharePoint content**.
This gives you a clearer picture of where to tighten controls.
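The AI Hub metrics above (total interactions, risk levels, sensitive-data exposure) boil down to aggregation over an interaction log. A sketch using a clearly illustrative record shape and data:

```python
from collections import Counter

# Illustrative interaction log; fields mirror the metrics described above.
interactions = [
    {"app": "Copilot", "risk": "low", "sensitive": False},
    {"app": "Copilot", "risk": "high", "sensitive": True},
    {"app": "ThirdPartyAI", "risk": "high", "sensitive": True},
]

total = len(interactions)                                   # total AI interactions
by_risk = Counter(i["risk"] for i in interactions)          # risk-level breakdown
sensitive_shared = sum(1 for i in interactions if i["sensitive"])
```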
2. **Protect sensitive data throughout its AI journey**
Organizations are particularly concerned about intellectual property and confidential projects leaking through AI tools. To address this:
- Use **Microsoft Purview** to ensure Copilot responses are always filtered by **user permissions**, so only authorized users see sensitive content.
- Apply **Microsoft Purview Information Protection** controls—encryption, watermarking, autolabeling, and label inheritance—to both prompts and responses.
- Add **sensitivity labels** to data in Microsoft 365 apps and services, SQL Server, Azure Data Lake Storage, and Microsoft Fabric; Copilot will **inherit these labels** in its outputs.
- Use **autolabeling** to automatically apply sensitivity labels based on detected sensitive information.
For third‑party generative AI apps:
- Configure **Microsoft Purview Data Loss Prevention (DLP)** to restrict users from pasting sensitive data into external AI prompts.
- Use **adaptive protection** to block high‑risk users from sharing sensitive data with AI tools while allowing lower‑risk users more flexibility.
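Adaptive protection, as described, keys the DLP action to the user's risk level rather than applying one static rule. A hypothetical decision sketch (the risk tiers and actions are assumptions, not Purview's actual configuration):

```python
# Assumed mapping from insider-risk level to DLP action for AI prompts.
ACTIONS = {"high": "block", "medium": "warn", "low": "allow"}

def adaptive_action(user_risk: str, contains_sensitive: bool) -> str:
    """High-risk users are blocked; lower-risk users get more flexibility."""
    if not contains_sensitive:
        return "allow"
    return ACTIONS.get(user_risk, "block")  # unknown risk level: fail closed

action = adaptive_action("high", contains_sensitive=True)
```

Failing closed on an unknown risk level is a design choice in this sketch: when the policy cannot place a user, it treats them as high risk.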
3. **Govern Copilot usage and support regulatory compliance**
With AI regulations evolving, compliance and risk teams need traceability and control over AI interactions. Microsoft Purview offers integrated tools that work with Copilot:
- **Audit**: Capture when Copilot interactions occur for accountability and investigations.
- **Data lifecycle management**: Retain and delete Copilot interaction content according to your policies.
- **Communication compliance**: Detect non‑compliant or unethical use of Copilot prompts and responses (for example, content that could relate to insider trading or other prohibited activities).
- **eDiscovery**: Search Copilot interactions and include them in legal or regulatory investigations.
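All four capabilities above depend on Copilot interactions being captured as searchable records. A minimal sketch of an eDiscovery-style search over such records (the record shape and data are illustrative assumptions):

```python
from datetime import date

# Illustrative captured Copilot interaction records.
records = [
    {"user": "alice", "date": date(2024, 3, 1), "prompt": "Draft the merger memo"},
    {"user": "bob", "date": date(2024, 3, 5), "prompt": "Summarize team notes"},
]

def search(keyword: str, since: date) -> list[dict]:
    """Return interactions mentioning the keyword on or after a given date."""
    return [r for r in records
            if keyword.lower() in r["prompt"].lower() and r["date"] >= since]

hits = search("merger", date(2024, 1, 1))
```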
By combining AI Hub visibility, strong data protection, and integrated compliance capabilities, you can use Copilot to reshape how work gets done while maintaining control over data security, regulatory obligations, and acceptable use of AI across your organization.