November 11, 2025 WitnessAI Update

WitnessAI Release Notes

Release Date: November 11, 2025
 

New Features

File Attachments

💡
Note: In order for scanned attachments to be blocked for an AI App, there must be a Policy with the Data Protection guardrail enabled, the Inspect Attachments checkbox checked, the AI App included as a Destination, and a Guardrail Action of Block. See the File Attachments documentation for details.
Files submitted or attached to AI Application prompts can now be allowed or blocked globally. In addition, some AI Applications support per-application control: attachments can be blocked or enabled for each of these applications individually, and scanned for content that violates your Data Protection Guardrails. Attachments that violate Policies can be blocked or allowed based on your chosen Guardrail Actions.
The list of applications with per-application attachment controls will expand in future releases. These applications appear at the top of the Attachment Block Policy.
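The note above lists the conditions that must all hold for a scanned attachment to be blocked. The sketch below restates those conditions as a predicate; the field names are illustrative assumptions, not the actual WitnessAI policy schema.

```python
# Hypothetical policy representation. Field names are illustrative only;
# they do not reflect the real WitnessAI configuration format.
policy = {
    "guardrails": {"data_protection": {"enabled": True, "inspect_attachments": True}},
    "destinations": ["Example AI App"],
    "guardrail_action": "Block",
}

def blocks_scanned_attachments(p, app):
    """True only when every condition from the note above is satisfied:
    Data Protection enabled, Inspect Attachments checked, the AI App
    listed as a Destination, and a Guardrail Action of Block."""
    dp = p["guardrails"]["data_protection"]
    return (dp["enabled"]
            and dp["inspect_attachments"]
            and app in p["destinations"]
            and p["guardrail_action"] == "Block")
```

If any one condition is missing (for example, the AI App is not a Destination), scanned attachments are not blocked for that app.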

Attachments In Conversations

The Conversation Details panel displays information on the files processed. Files smaller than 100 MB are supported; larger files, and files containing malware, are blocked.
Responses from applications that contain files are not scanned or blocked, so users can save them as needed.
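The size gate described above can be sketched as a simple check. This is an illustrative helper, not the actual WitnessAI implementation; in particular, interpreting the 100 MB limit as binary megabytes is an assumption.

```python
# 100 MB limit from the release notes; binary megabytes assumed.
MAX_ATTACHMENT_BYTES = 100 * 1024 * 1024

def is_attachment_size_allowed(size_bytes: int) -> bool:
    """Return True if a file of this size would be scanned rather than blocked."""
    return size_bytes < MAX_ATTACHMENT_BYTES
```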
 

Attachment Alerts

Blocked attachments will generate alerts in the Alerts console. The sidebar displays the details of the policy, file, and alert.

Attachments Supported

Currently, all text-based files can be scanned, that is, any file that can be opened and read with a text editor. Examples include:
  • Plain Text: .txt, .log, .eml
  • Source Code: .c, .cpp, .java, .js, .html, .css, .py, etc.
  • Markup and Data: .xml, .json, .csv, .tsv
  • Documents: .rtf, .md (Markdown)
  • Configuration: .ini, .cfg, .rc, .reg
  • Other: .sh (shell scripts), .ps (PostScript)
  • MS Office: .docx, .pptx, .xlsx
  • PDF: .pdf
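The supported extensions listed above could be checked with a helper like the following. This is an illustrative sketch, not part of the WitnessAI product, and the extension set simply mirrors the examples in the list.

```python
from pathlib import Path

# Extensions taken from the examples above; illustrative, not exhaustive.
SCANNABLE_EXTENSIONS = {
    ".txt", ".log", ".eml",                                 # plain text
    ".c", ".cpp", ".java", ".js", ".html", ".css", ".py",   # source code
    ".xml", ".json", ".csv", ".tsv",                        # markup and data
    ".rtf", ".md",                                          # documents
    ".ini", ".cfg", ".rc", ".reg",                          # configuration
    ".sh", ".ps",                                           # shell, PostScript
    ".docx", ".pptx", ".xlsx",                              # MS Office
    ".pdf",                                                 # PDF
}

def is_scannable(filename: str) -> bool:
    """Return True if the file's extension is in the scannable set."""
    return Path(filename).suffix.lower() in SCANNABLE_EXTENSIONS
```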
 

Browser-Based AI Applications Support (Beta)

WitnessAI now supports Prompt Observation and Intention Classification for most browser-based AI Applications currently in the WitnessAI App Catalog.
💡
Customers can opt-in to this beta feature today. Contact WitnessAI Customer Success or your Account Team.
Customers can now gain visibility into many more AI Applications immediately, without waiting for WitnessAI to deliver its complete set of capabilities for each Application.
Once enrolled in the Beta via Customer Success, customers can use the new Application filter (on Conversations) to show or hide Beta Applications. The default view is Hide.
When viewing the Beta Applications, users can see all User Prompts on all the Beta Applications. Some of the AI Applications in Beta include Craiyon.com, ideogram.ai, mirage.app, and humbot.ai. During Beta, WitnessAI observes traffic to these AI Applications without any end user impact.
With this feature, customers get much wider visibility into how users are using AI applications within their organization. It solves the problem of fragmented insight, making it easier to monitor risk and usage from one place.
 

Updated Features

Advanced Intention-based Behavior Activity Guardrail

Today, WitnessAI is introducing Advanced Intention-Based Policies, a new capability that makes it easier and more accurate for customers to define and enforce custom Behavior-based Guardrails governing how employees use external AI applications, so that organizations can confidently enforce consistent AI usage policies at scale.
With Advanced Intention-Based Policies, policy authors can describe policy context in natural language, specifying details such as what the policy is for, which application it covers, which types of users it applies to, and which behaviors are allowed or disallowed.
From this natural-language context, policy authors can use WitnessAI's ML-based recommender to readily generate optimized input Behavior rules. Authors can also provide input Behaviors manually and use the ML-based refiner to get suggestions for improved Behavior rules.
This feature reduces guesswork: Policy authors can simply use natural language to configure Behaviors that improve intention classification accuracy.
Users can also fine-tune Behaviors to reduce false positives. For example, users can configure well-defined behaviors such as "Sharing code is not allowed, but sharing Regex is allowed." This also reduces configuration drift by avoiding overlapping Behaviors (for example, "Executive compensation" and "Salary questions"), thereby delivering more reliable analytics.

Alert Labels

WitnessAI now supports adding custom labels to Alerts. Admins and Security Analysts can add custom Labels to classify Alerts for easier management, and can filter Alerts by Label.
When WitnessAI is enabled at scale across an organization, security analysts may find themselves dealing with a large number of Alerts. Labels make it easy for customers to simplify their management. For example, after reviewing an Alert, analysts can mark it "Reviewed" to hide it from their Alerts view, or mark an Alert as a "False Positive" to provide feedback to the WitnessAI team.

Adding a label

When viewing the Alerts console, click on any alert to display the right-side Alert Details panel.
Click the "Edit Labels" button and type your intended label; the drop-down list updates as you type.
If the label you want does not exist, enter any text you prefer and that label will be created. A single alert can have at most three labels assigned.
 

Filtering alerts by label

When viewing alerts, you can filter on labels to view the associated alerts.
Click the Label drop-down and start typing the label you want; the list updates as you type. Choosing a label filters the list of alerts. Clicking the NOT button to the right of a label filters the list to alerts that do not have that label.
You can choose as many labels as you need; the alerts list shows every alert that matches any of your chosen filters.
Click the Clear button to clear all filters. To clear only labels, uncheck the selected labels. If a NOT condition is active, click its NOT button to clear it.
For example, as a Security Analyst, I may want to find all alerts with a Severity of Critical, that are not yet assigned to anyone. The screenshot below shows the SEV: Critical button chosen, and the SEC: Assigned label chosen with the NOT button enabled.
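The label-filter semantics described above (match any selected label, exclude any NOT-ed label) can be sketched as follows. The alert data model here is an assumption for illustration; it is not the actual WitnessAI schema.

```python
def filter_alerts(alerts, include_labels=None, exclude_labels=None):
    """Keep alerts that carry at least one selected label (if any are
    selected) and none of the NOT-ed labels."""
    include = set(include_labels or [])
    exclude = set(exclude_labels or [])
    result = []
    for alert in alerts:
        labels = set(alert.get("labels", []))
        if include and not (labels & include):
            continue  # must match at least one chosen label
        if labels & exclude:
            continue  # NOT condition: drop alerts carrying an excluded label
        result.append(alert)
    return result
```

With no labels selected, only the NOT conditions apply, which matches the behavior of excluding, say, alerts already marked "Reviewed".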

Known Issues

When File Attachments are blocked in the Data Protection guardrail, any Warn message will not display in the end user’s application. However, the Alert Details pane will still display a Warn condition.
 

Fixed Issues

The ChatGPT support fix is complete, and all interim conditions have been resolved.
 

Breaking Changes

There are no breaking changes in this release.