As organizations race to adopt Microsoft Copilot and other AI tools, one critical question often gets overlooked: Is your data secure enough to support AI?
Before deploying Copilot, it’s essential to understand how your data is being accessed, shared, and protected across Microsoft 365. That’s where a quick-start Copilot Discovery exercise comes in.
Designed to run in a non-enforcing, insight-only mode, this exercise uses Microsoft Purview capabilities to uncover risks like shadow AI usage, insider threats, and overshared sensitive data, without impacting day-to-day operations.
These simple steps will give you a clear picture of your current data security posture and help set the foundation for a secure, confident Copilot rollout.
Whether you’re just starting to evaluate Copilot or already planning your rollout, these insights will help you create a compliant and sustainable foundation for AI.
What’s Involved:
This exercise uses Microsoft Purview capabilities (included in the Microsoft 365 E5 Compliance trial) and walks you through enabling:
- Data Security Posture Management (DSPM)
- DSPM for AI
- Insider Risk Analytics
- Communication Compliance in Test Mode
- NIST AI Risk Management Framework (RMF 1.0) Assessment
Don’t have Microsoft 365 E5 Compliance? No problem: activate your 90-day trial here (up to 300 users).
Assign Required Roles
Assign users one of the following roles to access DSPM and run assessments:
- Data Security Management
- Data Security Viewer (read-only access)
- Insider Risk Management Admin
- Microsoft Entra Global Administrator (only if needed)
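If you prefer scripting over the portal, these role-group assignments can also be made through Security & Compliance PowerShell (part of the ExchangeOnlineManagement module). A minimal sketch — the account names below are placeholders, and role-group names should be confirmed in your tenant, since Purview role groups are updated over time:

```powershell
# Requires the ExchangeOnlineManagement module:
#   Install-Module ExchangeOnlineManagement

# Connect to Security & Compliance PowerShell (admin@contoso.com is a placeholder)
Connect-IPPSSession -UserPrincipalName admin@contoso.com

# Add a user to the Data Security Viewer role group (read-only access to DSPM)
Add-RoleGroupMember -Identity "Data Security Viewer" -Member "analyst@contoso.com"

# Confirm the membership change
Get-RoleGroupMember -Identity "Data Security Viewer"
```

Because this is a discovery exercise, the read-only Data Security Viewer role is the safest default; reserve Data Security Management and Insider Risk Management Admin for the users who will actually enable the features below.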
Enable Data Security Posture Management (DSPM)
Data Security Posture Management (DSPM) provides a broad organizational view of your data security posture. It helps identify unprotected sensitive data, oversharing, and risky user behavior across Microsoft 365 services like SharePoint, OneDrive, and Teams. Enabling DSPM also activates analytics for Insider Risk Management and Data Loss Prevention (DLP), which are foundational for further insights.
What to Do:
1. Go to Microsoft Purview > Data Security Posture Management.
2. In the ‘Get started’ section, click ‘Enable DSPM Insights’.
3. Wait up to 72 hours for the initial scan to complete.
4. Review dashboards for oversharing, sensitive data exposure, and recommendations (do not apply them yet).
Enable Data Security Posture Management for AI (DSPM for AI)
DSPM for AI focuses on AI-specific risks such as shadow AI usage, Copilot interactions, and sensitive data in prompts and responses. It provides visibility into how AI tools are used across the organization, which is critical for understanding AI-related data exposure risks while remaining in discovery-only mode.
What to Do:
1. Go to Microsoft Purview > Data Security Posture Management.
2. In the ‘Get started’ section, locate ‘Extend your insights for data discovery’.
3. If prompted, click ‘Activate Microsoft Purview Audit’.
4. Enable the following two policies:
– Enable analytics for Insider Risk Management
– Enable analytics for Data Loss Prevention (DLP)
5. Navigate to the DSPM for AI dashboard or AI Hub.
6. Review Recommendations, Reports, and Data Assessments for AI-specific insights.
Enable Insider Risk Analytics
Insider Risk Analytics enables passive monitoring of user behavior to detect potential insider threats. It leverages analytics activated during DSPM setup to surface risky activities without enforcing policies.
What to Do:
1. Go to Microsoft Purview > Insider Risk Management.
2. In the ‘Get started’ section, click ‘Turn on analytics to scan for potential risks’.
Enable Communication Compliance (Discovery Mode)
Communication Compliance allows monitoring of internal communications (e.g., Teams, Exchange) for policy violations such as data leaks or harassment. Running in ‘test mode’ ensures no user alerts or enforcement actions are taken, making it ideal for discovery.
What to Do:
1. Go to Microsoft Purview > Communication Compliance.
2. Create a policy using a built-in template (e.g., Data leaks).
3. Set the policy to ‘Test mode’.
4. Assign reviewers and activate the policy.
Run the NIST AI RMF 1.0 Assessment
The NIST AI Risk Management Framework (RMF) 1.0 provides a structured approach to assess your AI governance posture. It evaluates your organization across four categories: Govern, Map, Measure, and Manage — all in a non-enforcing, insight-only mode.
What to Do:
1. Go to Microsoft Purview > Assessments > AI Risk Assessments.
2. Select ‘NIST AI Risk Management Framework (RMF) 1.0’.
3. Run the assessment and export results for internal review.
Ready to Turn Insights into Action?
To help you get the most value from your discovery exercise, we’re offering a complimentary 1-hour session with our data security and governance expert.
In this session, Elantis will:
- Walk you through your current data security posture
- Highlight key findings from your Microsoft Purview insights
- Draft a high-level roadmap focused on data security, compliance, and governance
You’ll walk away with a clearer understanding of your top risks and a plan to mitigate them effectively. Take the next step toward a trusted, secure, and future-ready Copilot deployment.
👉 Contact Us to book your session, or email us at info@elantis.com