Last updated: August 10, 2025
Shadow AI compliance is no longer a niche concern—it is the fastest‑growing source of audit findings across ISO‑aligned organizations. As employees adopt AI features embedded in familiar enterprise tools and suppliers integrate models into services, your risk posture changes without a formal change request. This guide maps the blind spots, shows how to detect shadow AI in weeks—not months—and provides a practical third‑party AI governance checklist that stands up to ISO 9001:2015, ISO/IEC 27001:2022, ISO/IEC 27701, and ISO/IEC 42001 scrutiny while aligning to the NIST AI Risk Management Framework and the EU AI Act.
Use this playbook if you are a quality leader, compliance officer, CIO/CISO, or executive who needs an evidence‑ready approach to AI governance across internal use and supplier ecosystems. It includes controls, contract language, metrics, and printable checklists you can deploy today.
- Shadow AI ≠ shadow IT: embedded AI inside approved apps creates invisible data flows and new accountability gaps.
- Third‑party AI risk is systemic: supplier models and enterprise copilots can change behavior overnight via silent updates.
- Audit‑ready in 30 days: combine discovery, triage, contracts, and monitoring for defensible evidence.
- Anchor in ISO & NIST: link every control to ISO 9001:2015 8.4, ISO/IEC 27001:2022 Annex A (e.g., A.5.31, A.5.33, A.8.24), ISO/IEC 27701, ISO/IEC 42001, and NIST AI RMF (Govern/Map/Measure/Manage).
Table of Contents
- Why “shadow AI compliance” matters now
- What counts as shadow AI in 2025? (and what doesn’t)
- Shadow AI vs. Shadow IT vs. Third‑party AI risk
- How to anchor governance in ISO & NIST without boiling the ocean
- Enterprise AI platforms: What Copilot and Gemini mean for compliance
- Supplier contracts: The AI clauses you need this quarter
- 10 prioritized controls for 30‑day audit readiness
- Common misconceptions and how to fix them
- Printable checklist: Shadow AI & Third‑party AI audit prep
- FAQs
Why “shadow AI compliance” matters now
- Embedded AI inside approved SaaS (email, productivity, chat) adds model‑driven processing where none existed before.
- Supplier models change rapidly; silent updates can alter behavior, risk levels, and data flows without notice.
- Regulators are focusing on AI governance: NIST AI RMF (US), the EU AI Act, and audits against ISO/IEC 42001 are rising.
“The goal isn’t zero shadow AI—that’s a fantasy. It’s bringing it into the light with a coherent policy, contracts, and controls.”
What counts as shadow AI in 2025? (and what doesn’t)
Shadow AI is any unapproved, unmonitored, or undocumented use of AI models, tools, or AI‑enabled features—including those inside approved software. Examples:
- Using a public LLM with proprietary drafts or PII.
- Activating AI features in productivity suites without governance alignment.
- Vendors enabling AI in their platforms without updating the data‑processing addendum.
What doesn’t count: governed pilots in an approved sandbox, AI inside a product with documented DPIA/AIA (or AI impact assessment), and data‑minimized experiments under policy.
Shadow AI vs. Shadow IT vs. Third‑party AI risk
Dimension | Shadow IT | Shadow AI | Third‑party AI Risk |
---|---|---|---|
Core issue | Unapproved tools | Unapproved AI capabilities (often inside approved tools) | Supplier model behavior, updates, and data handling |
Data exposure | Storage location & access | Prompts, outputs, embeddings, telemetry | Cross‑border transfers, vendor training use |
Governance anchor | ITSM/ISMS | AIMS (ISO/IEC 42001), ISMS (ISO/IEC 27001), PIMS (ISO/IEC 27701) | Vendor management (ISO 9001 8.4), DPA/SCCs, AI addenda |
Regulatory lens | Security & privacy | NIST AI RMF; EU AI Act transparency/oversight | EU AI Act deployer/provider duties; sector rules |
How to anchor governance in ISO & NIST without boiling the ocean
1) Establish an AIMS and connect it to your ISMS & QMS
- ISO/IEC 27001 Annex A: A.5.31 (Legal, regulatory & contractual), A.5.33 (Protection of records), A.8.24 (Use of cryptography) for prompt/response security and audit logs.
- ISO 9001:2015 8.4: Control of externally provided processes—extend to AI providers, models, and updates.
- ISO/IEC 27701: PIMS add‑on for privacy roles and DPIA/FRIA where AI processes PII.
2) Apply the NIST AI RMF functions
- Govern: policy, roles, inventories, decommissioning.
- Map: context, users, data, impacts, cross‑border flows.
- Measure: model risks, security controls, metrics.
- Manage: treatment, exceptions, incidents, supplier oversight.
Enterprise AI platforms: What Copilot and Gemini mean for compliance
- Microsoft 365 Copilot: Enterprise Data Protection for prompts/responses; admin controls via Copilot Control System, Purview retention/audit, Entra access, Defender for Cloud Apps; prompts aren’t used to train foundation models.
- Security Copilot: Customer Data sharing configurable; storage within home Geo unless opted‑in; retention/audit supported.
- Google Workspace Gemini: Workspace data not used to train models outside the domain without permission; data regions, AI classification, DLP, IRM, and audit logs apply; ISO 42001 certification noted.
Supplier contracts: The AI clauses you need this quarter
- Model disclosure & change notification: model type/version, guardrails, and advance notice for material updates.
- Data use & training restrictions: no training on prompts/outputs without written consent; clarify telemetry and log retention.
- Security & privacy controls: encryption; isolation/tenant boundaries; cryptography per ISO/IEC 27001 A.8.24; PII handling per ISO/IEC 27701.
- Residency & transfers: data region settings; SCCs; sub‑processor registers for AI.
- Testing & assurance: vulnerability testing, red‑teaming, safety evaluations; evidence on request.
- Incident reporting: AI‑specific incident definitions (prompt injection, data leakage), 24–72h notification windows, cooperation.
- Audit & evidence: retain AI interaction logs; provide policies, DPIA/AIA, NIST mapping, ISO certs.
- Use‑case restrictions: prohibited categories; transparency and human oversight for high‑risk use cases.
10 prioritized controls for 30‑day audit readiness
Control Set A — Discover & Triage
- AI Asset Inventory: catalog AI systems, copilots, models, plugins, vendors, data categories, regions.
- Shadow AI Detection: monitor DNS/proxy for AI endpoints; survey business units; simple declaration form.
- Risk Classification: tag use cases and map to treatment plans.
Control Set B — Policy & Contracts
- Acceptable Use Policy: forbid secrets/regulated data in prompts; list permitted channels.
- Supplier AI Addendum: model change notice, data‑use limits, incident reporting, audit clauses.
Control Set C — Platform & Data
- Retention & Audit: archive AI interactions (e.g., Purview/Vault).
- DLP & IRM: DLP on AI outputs; IRM to prevent AI retrieval from sensitive sources.
- Residency & Access: data regions; least privilege; restrict external web grounding if needed.
Control Set D — Safety & Assurance
- Red‑team & Evaluation: run jailbreak/prompt‑injection testing on critical flows; record findings.
- Human Oversight: reviewers for high‑risk outputs; sign‑offs where applicable.
Common misconceptions and how to fix them
“We don’t use public LLMs, so we’re safe.”
Shadow AI often happens inside approved tools. Configure enterprise copilots; don’t assume defaults meet obligations.
“Shadow AI is only an IT problem.”
It’s a governance and supplier problem. Link QMS, ISMS, PIMS, and AIMS so ownership is clear.
“Banning AI is the answer.”
Bans drive usage underground. Provide a safe, sanctioned path with managed copilots and guardrails.
Printable checklist: Shadow AI & Third‑party AI audit prep
Recommended internal resources
FAQs
How do we scope “shadow AI compliance” in a way auditors accept?
Define shadow AI as unapproved or undocumented AI usage—including embedded AI in approved tools—and govern it via ISO/IEC 42001 linked to your ISMS/QMS. Keep an inventory, classify risks per NIST AI RMF, and retain AI interactions for evidence.
What’s new about third‑party AI risk vs. traditional vendor risk?
AI suppliers may change model behavior via silent updates, alter data flows, or introduce web grounding. Contracts need clauses on change notice, training restrictions, residency, red‑teaming access, and AI‑specific incident reporting.
Which ISO/IEC 27001 Annex A controls matter most for AI right now?
A.5.31 legal/contractual compliance, A.5.33 protection of records (retain AI interactions), A.8.24 cryptography for prompts/logs, plus access control & monitoring.
How do Microsoft 365 Copilot and Google Gemini affect compliance scope?
Both provide enterprise commitments, but you must configure retention, data regions, DLP/IRM, and audit. For Copilot, see Enterprise Data Protection and CCS; for Gemini, use data regions, audit logs, AI classification, and DLP.
What does the EU AI Act require of deployers vs. providers?
Deployers must ensure human oversight, monitoring, incident reporting, and (for certain high‑risk systems) FRIA. Providers must meet transparency, conformity assessment (for high‑risk), and post‑market monitoring. Allocate roles and evidence in contracts.
Do we need separate privacy impact assessments for AI?
Where AI processes PII, align DPIA (ISO/IEC 27701) with AI system impact assessments (ISO/IEC 42001) and the NIST AI RMF Map function. For EU AI Act categories, perform FRIAs where required.
What metrics should we track to prove control effectiveness?
Inventory coverage, sanctioned vs. unsanctioned usage, time‑to‑detect shadow AI, DLP hits on AI outputs, number of supplier AI addenda signed, red‑team tests closed, and audit finding closure rates.
How do we limit data leaving our tenant during AI use?
Use vendor settings for data residency; restrict web grounding where necessary; enforce IRM on confidential sources; prefer enterprise channels with EDP/privacy commitments; and block personal LLM accounts on corporate networks.
What’s the best first step if we have nothing formal today?
Publish an AI acceptable‑use policy, start an AI asset inventory, and prioritize Copilot/Gemini configurations (retention, audit, DLP). In parallel, adopt a lightweight AIMS charter mapped to NIST AI RMF.