
AI Features

How the Veri-Tech platform uses AI, what data is shared with the AI provider, and what is never sent. This page is designed for security teams and compliance reviewers.

No tenant configuration, no user data, and no credentials are ever sent to Claude.

AI Designed for Security Teams

AI in the Veri-Tech platform operates on compliance metadata only — never on your tenant configuration or user data.

The Veri-Tech platform uses Anthropic's Claude to generate compliance insights, remediation plans, and interactive guidance. Our AI features are built with a strict data minimisation principle: only the information needed to produce a useful output is ever sent — specifically, control metadata from our own registry and your aggregate scan scores. No tenant configuration, no user data, no credentials. The sections below document exactly what each feature sends.

What the AI can see

  • Your compliance scores and pass/fail counts
  • Control titles and descriptions from our registry
  • Pre-authored guidance text we maintain internally
  • Framework names and tags (e.g. CIS, NIST)

What the AI never sees

  • Graph API tokens or any credentials
  • Tenant IDs, user accounts, or directory data
  • Your actual policy configurations
  • Device, mail, or file content of any kind

AI Insights
Professional+

Summarises your overall compliance posture — top risks, passing domain highlights, and recommended next steps. Model: Claude Haiku 4.5.

Trigger: Automatically generated after each compliance scan completes.
Location: Assessment results page.
Model: Claude Haiku 4.5

Data sent to Anthropic

  • Aggregate compliance score (overall and per domain)
  • Control pass/fail counts per framework (CISA, CIS, NIST, etc.)
  • Top failing control titles and their severity
  • Framework names and versions assessed

Output stored

  • Plain-text summary narrative (stored in Azure Table Storage, scoped to your job)

AI Remediation Plan
Professional+

Produces a prioritised, step-by-step remediation plan for your failing controls, grouped by risk tier. Model: Claude Haiku 4.5.

Trigger: On-demand — you click "Generate Plan" on the remediation page.
Location: Compliance remediation planning page.
Model: Claude Haiku 4.5

Data sent to Anthropic

  • Failing control IDs, titles, and descriptions from the Veri-Tech control registry
  • Assigned severity and disruption-risk ratings (internal metadata, no tenant config)
  • Framework tags (e.g. CIS 1.1.2, NIST AC-2)
  • Pre-authored remediation guidance text from the control registry

Output stored

  • Structured remediation plan (stored in Azure Table Storage, scoped to your job)

Compliance Copilot
Enterprise

Interactive chat assistant that answers questions about your specific assessment results, control requirements, and remediation options. Model: Claude Sonnet 4.6.

Trigger: On-demand — floating chat panel on the assessment results page.
Location: Assessment results page (floating panel).
Model: Claude Sonnet 4.6

Data sent to Anthropic

  • Aggregate compliance scores (overall, per domain, per framework)
  • Failing control IDs, titles, severity, and domain — no tenant identifiers
  • Pre-authored guidance text from the Veri-Tech control registry
  • Your chat message history within the current session (max 2,000 chars per message)
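Putting the list above together: each Copilot request combines assessment metadata with the current session's history, trimmed to the documented 50-message window. This is a hedged sketch; `build_copilot_request` and the dictionary shapes are hypothetical, not the actual Veri-Tech implementation.

```python
# Hypothetical sketch of Copilot request assembly: assessment
# metadata plus the session's chat history, trimmed so that the
# request never carries more than 50 messages in total.

MAX_HISTORY_MESSAGES = 50

def build_copilot_request(assessment: dict, history: list, user_msg: str) -> dict:
    # Keep the most recent turns, leaving room for the new message.
    trimmed = history[-(MAX_HISTORY_MESSAGES - 1):]
    return {
        "context": {
            "scores": assessment["scores"],              # aggregate only
            "failing_controls": assessment["failing"],   # IDs, titles, severity
            "guidance": assessment["guidance"],          # pre-authored registry text
        },
        "messages": trimmed + [{"role": "user", "content": user_msg}],
    }

history = [{"role": "user", "content": f"q{i}"} for i in range(80)]
req = build_copilot_request(
    {"scores": {"overall": 72}, "failing": [], "guidance": []},
    history,
    "Why is MFA failing?",
)
assert len(req["messages"]) == MAX_HISTORY_MESSAGES
```

Older turns simply fall off the front of the window, which is why long sessions cannot accumulate an unbounded prompt.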

Output stored

  • Streaming text responses (not persisted — session only, cleared on page close)

Data never sent to the AI provider

A complete list of data categories that are explicitly excluded from all AI prompts across every feature.

  • Microsoft Graph API tokens or credentials of any kind
  • Azure AD / Entra ID tenant identifiers, tenant IDs, or display names
  • User account details, email addresses, or display names from your tenant
  • Security group memberships, role assignments, or user UPNs
  • Conditional Access policy configurations or named location data
  • Device compliance policies, MDM profiles, or Intune configurations
  • Exchange mail flow rules, connectors, or message content
  • SharePoint site contents, files, or document libraries
  • Audit logs, sign-in logs, or activity data
  • Your billing information, subscription details, or payment methods
  • Veri-Tech account credentials or internal JWT tokens

Provider, data handling & technical controls

Details about our AI provider (Anthropic), the DPA, data retention, Zero Data Retention option, data residency, and how to opt out.

AI provider

Anthropic (maker of Claude). Requests are made server-side from Veri-Tech infrastructure — your browser never calls Anthropic directly. Anthropic's Trust Center and compliance artifacts are available at trust.anthropic.com.

No tenant identifiers in prompts

Tenant display names, tenant IDs, and plan tier are never included in prompts sent to Anthropic. AI features operate only on anonymised compliance metadata — aggregate scores, control IDs from our registry, and pre-authored guidance text.

Input size limits

Copilot chat messages are capped at 2,000 characters server-side. The conversation history sent per request is limited to 50 messages. These limits prevent bulk data from being accidentally or intentionally submitted.
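A server-side guard for the 2,000-character cap could look like the following minimal sketch; the helper name and error message are hypothetical, but the check runs before any request to the AI provider is built, so an oversized message is rejected regardless of what the client sends.

```python
# Sketch of a server-side input guard matching the limits described
# above: messages over 2,000 characters are rejected outright.
# Hypothetical helper, not the actual Veri-Tech code.

MAX_MESSAGE_CHARS = 2_000

def validate_chat_message(message: str) -> str:
    if len(message) > MAX_MESSAGE_CHARS:
        raise ValueError(f"Message exceeds {MAX_MESSAGE_CHARS} character limit")
    return message

# A message at exactly the limit passes; one character more is rejected.
assert validate_chat_message("x" * 2_000) == "x" * 2_000
try:
    validate_chat_message("x" * 2_001)
    raise AssertionError("expected rejection")
except ValueError:
    pass
```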

Data retention by Anthropic

Prompts and responses are not used to train Anthropic models. By default, API inputs and outputs are retained for 7 days for trust-and-safety purposes, then deleted. Zero Data Retention (ZDR) — where data is processed in real-time and immediately discarded — is available under a signed Anthropic enterprise agreement.

Data Processing Agreement (DPA)

A DPA with Standard Contractual Clauses (SCCs) is available from Anthropic. It is automatically incorporated into their commercial API terms — no separate signature is required. Veri-Tech's acceptance of Anthropic's commercial terms constitutes acceptance of the DPA, establishing Anthropic as a data processor and Veri-Tech as the controller.

Data in transit

All requests to Anthropic are made over TLS 1.2+. The API key is stored as a secret reference in Azure Key Vault and never exposed to the portal or client browsers.

Data at rest

AI-generated outputs (Insights and Remediation Plans) are stored in Azure Table Storage in the same tenant-partitioned structure as your scan results. Copilot chat responses are not persisted by Veri-Tech.
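The tenant-partitioned layout described above can be sketched as a simple entity builder. The key format here is hypothetical (not the actual Veri-Tech schema); it only illustrates the pattern of scoping the partition key to a tenant and the row key to a scan job, so one tenant's AI outputs can never be read through another tenant's partition.

```python
# Sketch of a tenant-partitioned Azure Table Storage layout for AI
# outputs: the partition key scopes rows to a tenant and the row key
# to a scan job. Key format is illustrative, not the actual schema.

def ai_output_entity(tenant_ref: str, job_id: str, summary: str) -> dict:
    return {
        "PartitionKey": tenant_ref,        # one partition per tenant
        "RowKey": f"ai-insight-{job_id}",  # one row per scan job
        "Summary": summary,                # plain-text AI narrative
    }

entity = ai_output_entity("tenant-a1b2", "job-0042", "Posture improved...")
assert entity["PartitionKey"] == "tenant-a1b2"
assert entity["RowKey"] == "ai-insight-job-0042"
```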

Data residency

Anthropic processes API requests in the United States. Anthropic does not currently offer dedicated EU or APAC data centres. Organisations with strict data residency requirements should evaluate Zero Data Retention (available via enterprise agreement) as the primary compliance control, or contact us to discuss options.

Tier gating

AI features are enforced server-side by tier checks on every request. A Starter plan tenant cannot receive AI-generated content even if the UI were manipulated.
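Server-side tier gating of this kind amounts to a rank comparison on every request. The sketch below uses the tier names from this page (Starter, Professional, Enterprise); the enforcement helper itself is hypothetical.

```python
# Sketch of a server-side tier check: every AI request is gated by
# plan tier regardless of what the UI sends. Hypothetical helper.

TIER_RANK = {"Starter": 0, "Professional": 1, "Enterprise": 2}

def require_tier(tenant_tier: str, minimum: str) -> None:
    if TIER_RANK[tenant_tier] < TIER_RANK[minimum]:
        raise PermissionError(f"{minimum}+ plan required for AI features")

require_tier("Enterprise", "Professional")   # Professional+ feature: allowed
try:
    require_tier("Starter", "Professional")  # Starter tenant: always denied
    raise AssertionError("expected denial")
except PermissionError:
    pass
```

Because the check runs on the server, manipulating the client UI cannot bypass it: a Starter-plan request fails before any call to the AI provider is made.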

Opting out

AI Insights are generated automatically for Professional+ plans after each scan. If you prefer not to use AI features, contact support@veri-tech.net — we can disable AI generation for your account. Copilot chat is always on-demand and never runs unless you open the panel and send a message.

Questions about AI data handling? Contact us · Review Graph API permissions · Privacy Policy