Version 2.0.0 Release Notes
Package version: 2.0.0-preview-0001
This is a major release that introduces new modules, an orchestrator-based architecture, Omnichannel Communications, expanded MCP protocol support, and document handling capabilities.
New Modules
AI Prompt Templates
- Feature ID: `CrestApps.OrchardCore.AI.Prompting`
- Centralized AI system prompt management with file-based discovery, Liquid template rendering, and prompt composition.
- Standalone — The `CrestApps.AI.Prompting` library can be used in any .NET project without Orchard Core.
- All built-in system prompts (chart generation, search query extraction, title generation, task planning, RAG instructions, tabular batch processing, etc.) have been migrated to reusable `.md` template files in `AITemplates/Prompts/` directories.
- Extensible parser infrastructure — Markdown front matter parsing ships by default; YAML, JSON, or other formats can be added by implementing `IAITemplateParser`.
- JSON compaction — Fenced `json` code blocks in templates are automatically compacted during parsing to reduce token usage while keeping source files readable.
- Template caching — Parsed templates are cached in memory and invalidated on tenant reload or application restart.
- `AITemplateBuilder` — High-performance builder class for composing system prompts from multiple templates, raw strings, and template IDs with minimal allocations.
- Category-grouped UI — AI Profile and Chat Interaction editors show prompt templates grouped by category with description display and JSON key-value parameter input.
- Parameter descriptors — Templates can declare expected arguments using `Parameters:` front matter metadata. The UI displays parameter names and descriptions when a template is selected.
- Runtime template consumption — When a prompt template is selected on an AI Profile or Chat Interaction, the template is rendered at runtime during orchestration. The rendered output replaces the custom system message.
- Liquid templates with typed objects — Document availability and task planning templates now receive typed .NET objects (e.g., `AIToolDefinitionEntry`, `ToolRegistryEntry`, `ChatDocumentInfo`) and use Liquid loops to render tool and document lists.
- RAG-related prompts (response guidelines, scope constraints, tool search instructions) have been extracted from orchestration handlers into reusable template files.
- `IAITemplateService.RenderAsync` now throws `KeyNotFoundException` when the requested template ID is not found. Previously it returned `null`.
- `IAITemplateRenderer` renamed to `IAITemplateEngine` — The interface handles both rendering and validation, so the name better reflects its responsibilities. `FluidAITemplateRenderer` is now `FluidAITemplateEngine`.
- `render_ai_template` Liquid tag — New custom Fluid tag for rendering a sub-template within the current Liquid scope. `{% render_ai_template "template-id" %}` shares parent variables with the included template, enabling composable prompt templates. Variables defined in the sub-template do not leak to the parent. Includes recursion protection (max depth 10). Available in the standalone `CrestApps.AI.Prompting` library.
- Removed `include_prompt` Liquid filter — The `include_prompt` filter has been removed in favor of the more capable `render_ai_template` tag. Replace `{{ "template-id" | include_prompt }}` with `{% render_ai_template "template-id" %}`.
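As an illustration, a minimal prompt template file in `AITemplates/Prompts/` might look like the sketch below. The front matter keys shown (`Category`, `Description`, `Parameters`) and the `summarize-guidelines` sub-template ID are illustrative assumptions, not a verbatim built-in template:

```markdown
---
Category: Summarization
Description: Produces a concise summary of a chat conversation.
Parameters:
  tone: Desired tone of the summary (e.g., formal, casual).
---
You are a summarization assistant. Write a concise summary in a {{ tone }} tone.

{% render_ai_template "summarize-guidelines" %}
```

The `{% render_ai_template %}` tag shares the parent scope (so `tone` is visible inside the sub-template), while variables defined inside the sub-template stay local.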
AI Profile Templates
- Feature ID: `CrestApps.OrchardCore.AI` (part of the core AI module)
- Reusable templates for creating AI Profiles with pre-configured system messages, model parameters, tool selections, and connection settings.
- Source-aware architecture — Templates carry a `Source` property. Two built-in sources are registered: `Profile` (for AI profile configuration) and `SystemPrompt` (for reusable system prompt text). Other modules can register additional sources via `AIOptions.AddTemplateSource()`.
- Metadata-based storage — The `AIProfileTemplate` model contains only generic fields (Name, DisplayText, Description, Category, IsListable, Source). All source-specific fields are stored as metadata objects in the template's `Properties` using `As<T>()`/`Put<T>()`:
  - `ProfileTemplateMetadata` — Profile type, connection, system message, model parameters, tool names, etc.
  - `SystemPromptTemplateMetadata` — System prompt text only.
- Three-driver architecture — Template editing uses source-aware display drivers:
  - `AIProfileTemplateFieldsDisplayDriver` — Generic fields (always visible for all sources).
  - `AIProfileTemplateDisplayDriver` — Profile-specific fields (only shown for the Profile source).
  - `SystemPromptTemplateDisplayDriver` — System prompt field (only shown for the SystemPrompt source).
- Triple-source discovery — Define templates via the admin UI (database-stored, runtime-managed), as `.md` files in `AITemplates/Profiles/` directories (module-embedded, read-only), or in `App_Data/AITemplates/Profiles/` and `App_Data/Sites/{tenantName}/AITemplates/Profiles/` (filesystem, local customization).
- Template application — When creating a new AI Profile, select a template from the dropdown and click Apply to pre-fill the form. All display drivers render the pre-populated values. Template properties (except `ProfileTemplateMetadata` and `SystemPromptTemplateMetadata`) are copied to both `profile.Properties` and `profile.Settings`, ensuring all profile drivers can read applied values.
- Generic Templates UI — The admin UI at Artificial Intelligence → Templates shows a source selection modal when creating a new template, similar to the Profiles UI. The controller, views, and admin menu have been renamed from `ProfileTemplates` to `AITemplates` for genericity.
- External module drivers — Every module that adds a `DisplayDriver<AIProfile>` also provides a `DisplayDriver<AIProfileTemplate>` so templates capture the full profile configuration. Source-specific drivers use `.RenderWhen()` to hide sections that don't apply.
  - AI Tools — Select available tools for the template.
  - AI Chat — Admin menu visibility, session settings, data extraction entries, post-session tasks.
  - AI Chat Analytics — Session metrics, conversion goals, AI resolution detection.
  - AI Documents — Allow session documents toggle. Upload and attach documents directly to templates with text extraction, chunking, and embedding generation.
  - AI MCP — MCP connection selections.
  - AI DataSources — Data source, strictness, top N documents, filters.
- Document cloning — When a template includes attached profile documents, applying the template to a new profile clones all `AIDocument` and `AIDocumentChunk` records (including pre-computed embeddings) to the new profile. This allows templates to serve as pre-packaged RAG knowledge bases.
- `AIProfileTemplateIndex` — YesSql `MapIndex` for efficient querying by `Source`, `Name`, `Category`, `ProfileType`, and `IsListable`.
- `IAIProfileTemplateManager` — Extends `INamedSourceCatalogManager<AIProfileTemplate>` with `GetListableAsync()`. Provides full CRUD and unified read access, merging database and file-based templates. Database templates take precedence when names conflict. Replaces the former `IAIProfileTemplateService`.
- Deployment support — Export AI Profile Templates via Orchard Core deployment plans. Select all templates or choose specific ones by name.
- Recipe support — Import and export templates using the `AIProfileTemplate` recipe step. Supports create, update by ID or name, and validation.
- Markdown front matter — Profile templates use the same parser as prompt templates. Profile-specific keys (`ProfileType`, `ConnectionName`, `Temperature`, `ToolNames`, etc.) are extracted from `AdditionalProperties`.
- Permissions — A new `ManageAIProfileTemplates` permission controls access to template management.
- Sample templates — Ships with `chat-session-summarizer.md` (a TemplatePrompt profile for summarizing conversations) as a built-in example template.
- Recipe step schemas — When the `CrestApps.OrchardCore.Recipes` feature is enabled, JSON schemas are registered for all AI recipe steps (`AIProfile`, `AIProfileTemplate`, `AIDeployment`, `DeleteAIDeployments`, `AIProviderConnections`), providing validation and documentation for recipe authoring.
- Extensible — Custom modules can add a `DisplayDriver<AIProfileTemplate>` to contribute template settings. See the profile templates documentation.
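For orientation, an `AIProfileTemplate` recipe step might be sketched as follows. The step name and the generic template fields come from this release; the `templates` collection name and overall layout are assumptions — consult the registered JSON schema for the authoritative shape:

```json
{
  "steps": [
    {
      "name": "AIProfileTemplate",
      "templates": [
        {
          "Name": "support-bot",
          "DisplayText": "Support Bot",
          "Description": "Pre-configured customer support profile.",
          "Category": "Support",
          "IsListable": true,
          "Source": "Profile"
        }
      ]
    }
  ]
}
```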
AI Chat Interactions
- Feature ID: `CrestApps.OrchardCore.AI.Chat.Interactions`
- Provides ad-hoc chat sessions with configurable parameters — users can adjust model settings, attach documents, upload images, and generate charts without requiring a predefined AI profile.
AI Chat Copilot Integration
- Feature ID: `CrestApps.OrchardCore.AI.Chat.Copilot`
- Integrates with the GitHub Copilot SDK to provide a Copilot-based orchestrator for AI completions.
AI Documents
A suite of modules for document upload, text extraction, and embedding during chat sessions:
| Module | Feature ID |
|---|---|
| AI Documents (Core) | CrestApps.OrchardCore.AI.Documents |
| PDF Support | CrestApps.OrchardCore.AI.Documents.Pdf |
| OpenXml Support (Word, Excel, PowerPoint) | CrestApps.OrchardCore.AI.Documents.OpenXml |
| Azure AI Search | CrestApps.OrchardCore.AI.Documents.AzureAI |
| Elasticsearch | CrestApps.OrchardCore.AI.Documents.Elasticsearch |
- Updated AI Documents settings guidance to use the AI Documents index naming, include indexing-feature enablement guidance, and document production guidance for index profile stability.
- Updated AI Profile document orchestration prompts and document-search output so profile-attached documents are treated as hidden background knowledge, while session-uploaded documents remain user-visible.
- Updated document availability prompt arguments to split profile knowledge-base documents from user-supplied session documents for clearer privacy-aware instructions.
AI Data Sources
Modules for RAG (Retrieval-Augmented Generation) and knowledge base indexing:
| Module | Feature ID |
|---|---|
| AI Data Sources (Core) | CrestApps.OrchardCore.AI.DataSources |
| Azure AI Search | CrestApps.OrchardCore.AI.DataSources.AzureAI |
| Elasticsearch | CrestApps.OrchardCore.AI.DataSources.Elasticsearch |
MCP Resource Adapters
| Module | Feature ID |
|---|---|
| FTP Resources | CrestApps.OrchardCore.AI.Mcp.Resources.Ftp |
| SFTP Resources | CrestApps.OrchardCore.AI.Mcp.Resources.Sftp |
Agent-to-Agent (A2A) Protocol
A new module implementing the A2A protocol for multi-agent interoperability:
| Feature | Feature ID | Description |
|---|---|---|
| A2A Client | CrestApps.OrchardCore.AI.A2A | Connect to external A2A hosts and use their agents as AI tools in orchestration. |
| A2A Host | CrestApps.OrchardCore.AI.A2A.Host | Expose Agent-type AI Profiles as discoverable A2A agents at /.well-known/agent-card.json. |
- A2A Host — Exposes all Agent-type AI Profiles as A2A agents. Each agent gets its own agent card with skills derived from the profile description and configured tools. Supports streaming (`TaskArtifactUpdateEvent` SSE) and non-streaming responses. Configurable authentication: OpenId Connect (default, integrates with OrchardCore's OpenIddict feature), API Key, or None. Agent cards include `SecuritySchemes` so clients know how to authenticate. An `ExposeAgentsAsSkill` option consolidates all agents into a single card with multiple skills.
- A2A Client — Add connections to external A2A hosts from the admin UI (Artificial Intelligence → Agent to Agent Hosts). Connections support the same authentication types as MCP client connections (Anonymous, API Key, Basic, OAuth 2.0, etc.). Agent cards are cached for 15 minutes per connection with signal-based invalidation on update/delete. Select A2A connections on AI Profiles, Templates, and Chat Interactions under the Capabilities tab. Connected agents are automatically registered as AI tools via `A2AAgentProxyTool` and can be invoked by the orchestrator.
- AI Discovery Functions — Three new built-in AI tools: `list_available_agents` (returns agent cards from all connections and local agents), `find_agent_for_task` (semantic + keyword search for the best agent), and `find_tools_for_task` (semantic + keyword search for relevant AI tools). These help the AI model discover and select the right agent or tool for a given task.
- Sample A2A Client — A standalone Razor Pages application at `CrestApps.OrchardCore.Samples.A2AClient` for testing A2A agents. Lists agent cards, supports streaming and non-streaming message sending, and integrates with .NET Aspire.
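To give a feel for discovery, an agent card served at `/.well-known/agent-card.json` might look roughly like this. The skills and `securitySchemes` behavior are described above; the exact field names and values emitted by the host are assumptions here, modeled on the A2A agent card concept:

```json
{
  "name": "Content Assistant",
  "description": "Agent-type AI Profile exposed over A2A.",
  "url": "https://example.com/a2a",
  "capabilities": { "streaming": true },
  "skills": [
    {
      "id": "content-editing",
      "name": "Content editing",
      "description": "Creates and updates content items."
    }
  ],
  "securitySchemes": {
    "apiKey": { "type": "apiKey", "in": "header", "name": "X-Api-Key" }
  }
}
```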
Omnichannel Communications
A new suite of modules for multi-channel communication:
| Module | Feature ID | Description |
|---|---|---|
| Omnichannel (Core) | CrestApps.OrchardCore.Omnichannel | Core omnichannel services |
| SMS | CrestApps.OrchardCore.Omnichannel.Sms | AI-driven SMS automation with Twilio |
| Event Grid | CrestApps.OrchardCore.Omnichannel.EventGrid | Azure Event Grid webhook integration |
| Management | CrestApps.OrchardCore.Omnichannel.Managements | Contact and conversation management |
Recipes Module
- Feature ID: `CrestApps.OrchardCore.Recipes`
- Provides recipe steps for configuring CrestApps modules via recipes.
New Features in Existing Modules
AI Services (CrestApps.OrchardCore.AI)
- Orchestrator Architecture — The new `IOrchestrator` interface replaces the previous prompt routing system. The orchestrator manages planning, tool scoping, and iterative agent execution loops.
- AI Tool Registration — New fluent API for registering AI tools with `.AddAITool<T>()`, supporting categories, purposes, and selectable/system tool modes.
- AI Profile Types — Added `Utility` and `TemplatePrompt` profile types in addition to `Chat`.
- AI Deployments — New feature for managing AI model deployments.
- Typed AI Deployments — AI deployments are now first-class typed entities. Each deployment has a `Type` property (`Chat`, `Utility`, `Embedding`, `Image`, `SpeechToText`) and an `IsDefault` flag. Connections are pure connections — the deployment name fields (`ChatDeploymentName`, `EmbeddingDeploymentName`, `UtilityDeploymentName`, `ImagesDeploymentName`) on `AIProviderConnection` are deprecated. The `IAIDeploymentManager` service resolves deployments by type with a fallback chain (explicit → connection default → global default). A new settings page under Settings > Artificial Intelligence > Default Deployments configures global defaults. AI Profiles and Chat Interactions now use `ChatDeploymentId` and `UtilityDeploymentId` instead of `DeploymentId`. Deployment dropdowns show all deployments grouped by connection. Existing connection deployment names are automatically migrated to typed `AIDeployment` records on startup. Both the old format (deployment names on the connection) and the new format (`Deployments` array) are supported in `appsettings.json`. See the migration guide for details.
- AI Connection Management — UI for managing provider connections from the admin dashboard.
- AI Chat WebAPI — RESTful API endpoints for interacting with AI chat.
- Workflow Integration — AI Completion tasks for Orchard Core Workflows.
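A hedged sketch of the new typed `Deployments` array in `appsettings.json` is shown below. The deployment `Type` values and `IsDefault` flag come from this release; the configuration section name (`CrestApps_AI`) and exact property layout are illustrative assumptions — see the migration guide for the authoritative format:

```json
{
  "CrestApps_AI": {
    "Connections": [
      {
        "Name": "azure-openai-main",
        "Deployments": [
          { "Name": "gpt-4o", "Type": "Chat", "IsDefault": true },
          { "Name": "text-embedding-3-small", "Type": "Embedding", "IsDefault": true }
        ]
      }
    ]
  }
}
```

With defaults configured this way, profiles that do not specify an explicit `ChatDeploymentId` fall back to the connection default, then to the global default.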
MCP (CrestApps.OrchardCore.AI.Mcp)
- MCP Server — Expose your Orchard Core site as an MCP server endpoint, allowing external AI agents to discover and use your tools, prompts, and resources.
- MCP Prompts and Resources — Prompts and resources can be added and managed via the admin UI.
- Templated Resources — Support for dynamic MCP resources defined with URI templates.
- Stdio Transport — Connect to local MCP servers (e.g., Docker containers) via Standard Input/Output.
- Template URI Whitespace Handling — Resource URI templates and incoming URIs are now trimmed of leading/trailing whitespace before matching, preventing mismatches caused by accidental spaces in URI definitions.
- File Resource Directory Rejection — The file resource handler now returns a descriptive error when the resolved path is a directory instead of a file, rather than attempting to read directory content.
AI Agent (CrestApps.OrchardCore.AI.Agent)
- Expanded toolset with 30+ built-in tools covering content management, tenant management, feature toggles, workflow automation, and communication tasks.
- Removed per-tool permission checks — AI tools no longer perform their own authorization checks at invocation time. Permission enforcement is handled at the profile design level by `LocalToolRegistryProvider`, which verifies that the user configuring the AI Profile has the `AIPermissions.AccessAITool` permission for each tool they expose. This ensures tools work correctly in anonymous contexts (e.g., public chat widgets, background tasks, post-session processing) without failing due to missing user authentication.
- `CreateOrUpdateContentTool` owner fallback parameters — Added optional `ownerUsername`, `ownerUserId`, and `ownerEmail` parameters. When content is created without an authenticated user (e.g., from an anonymous chat widget), the AI model can specify who the content should be created on behalf of. The tool resolves the user and sets `contentItem.Owner` and `contentItem.Author` accordingly.
Breaking: If you relied on individual AI tools rejecting unauthorized requests at runtime (e.g., `IsAuthorizedAsync` returning permission-denied messages), this behavior has been removed. All authorization is now enforced at the tool registry level when an AI Profile or Chat Interaction is configured. Ensure your AI Profiles are configured by users with appropriate permissions.
Improvements
MCP Client Authentication (CrestApps.OrchardCore.AI.Mcp)
- Structured Authentication Types — The SSE connection UI now provides a dedicated authentication type selector instead of requiring raw HTTP header JSON. Supported types: Anonymous, API Key, Basic Authentication, OAuth 2.0 Client Credentials, OAuth 2.0 + Private Key JWT, OAuth 2.0 + Mutual TLS (mTLS), and Custom Headers (legacy).
- API Key Authentication — Configure an API key with a customizable header name and optional prefix (e.g., `Bearer`, `ApiKey`).
- Basic Authentication — Provide username and password for HTTP Basic auth. Credentials are Base64-encoded automatically.
- OAuth 2.0 Client Credentials — Acquire access tokens automatically using the `client_credentials` grant type. Tokens are cached in memory and refreshed before expiration.
- OAuth 2.0 + Private Key JWT — Authenticate to OAuth 2.0 token endpoints using a signed JWT client assertion with an RSA private key. Supports an optional Key ID (`kid`) for identity providers that require it.
- OAuth 2.0 + Mutual TLS (mTLS) — Authenticate using a PFX/PKCS#12 client certificate for mutual TLS authentication with the token endpoint.
- Credential Protection — All sensitive fields (API keys, passwords, client secrets, private keys, client certificates, certificate passwords) are encrypted at rest using ASP.NET Core Data Protection. Encrypted values are never exposed in deployment exports.
- Backward Compatibility — Existing connections with raw `AdditionalHeaders` are automatically recognized as the "Custom Headers" type when editing.
Breaking: `IMcpClientTransportProvider.Get()` has been renamed to `GetAsync()` and now returns `Task<IClientTransport>` instead of `IClientTransport`. Custom transport provider implementations must update their method signature.
Unified Citation & Reference System
The citation and reference system has been completely reworked so that every AI provider (Azure OpenAI, OpenAI, Ollama, Azure AI Inference) now returns the same citation references. Previously, citations only worked with Azure OpenAI's native data-sources feature (`GetMessageContext()`); since we now inject context ourselves via preemptive RAG and tool-based search, that approach no longer applied.
What changed:
- `[doc:N]` citation markers are now produced consistently by both Data Source and Document preemptive RAG handlers, as well as by the `DataSourceSearchTool` and `SearchDocumentsTool` AI tools.
- `referenceType` is stored in the knowledge base index so the system knows whether a reference is a Content item, an uploaded Document, or a custom data source type.
- `AICompletionReference` now includes `ReferenceId` and `ReferenceType` properties, enabling downstream consumers (hubs, UI) to generate appropriate links.
- `IAIReferenceLinkResolver` — A new keyed-service interface for resolving reference links by type. Register custom resolvers with `services.AddKeyedScoped<IAIReferenceLinkResolver, MyResolver>("MyType")` to generate links for custom reference types. `CompositeAIReferenceLinkResolver` dispatches to the correct keyed resolver based on `referenceType`. When no resolver is registered, the reference is shown without a link.
- `CitationReferenceCollector` collects references from all sources (preemptive RAG context, tool-invoked searches) and resolves links in a single pass.
- Content item link resolution — `DefaultAILinkGenerator` is registered as a keyed `IAIReferenceLinkResolver` for the `"Content"` reference type. Content item references automatically receive links generated via OrchardCore's `LinkGenerator` with the standard `OrchardCore.Contents` route. Document references (uploaded files) are shown by filename without a link.
- `DocumentChunkSearchResult` now includes `DocumentKey` and `FileName` properties for uploaded document citation tracking.
- Azure OpenAI — Removed the deprecated `GetMessageContext()`/`Citations` extraction logic and the `IAILinkGenerator` dependency from `AzureOpenAICompletionClient`. References are now handled uniformly via the orchestration pipeline.
- `AIInvocationScope`/`AIInvocationContext` — A new `AsyncLocal<T>`-based ambient context that replaces `HttpContext.Items` for all per-invocation AI data. This ensures full isolation between concurrent SignalR hub calls on the same WebSocket connection, preventing reference leaks, stale data source IDs, and other cross-invocation contamination issues. See the AI Tools documentation for details.
- Shared reference counter — `AIInvocationContext.NextReferenceIndex()` provides a monotonically increasing, thread-safe counter used by all preemptive RAG handlers and search tools, ensuring `[doc:N]` indices never collide even when data source and document references are produced in the same request.
- Incremental citation delivery — Citation references are now collected and sent to the client progressively during streaming. Preemptive RAG references (from data sources and documents) are resolved before the streaming loop starts, so the first chunk already includes them. Tool-invoked references are merged incrementally during streaming as tools execute. This ensures the JavaScript client can render `[doc:N]` superscripts in real time.
- `ChatSession` moved to the `Items` dictionary — The `AIInvocationContext.ChatSession` property has been removed. The chat session is now stored in `AIInvocationContext.Items["AIChatSession"]` instead. Tools that need the chat session should read from `Items` (e.g., `invocationContext.Items.TryGetValue("AIChatSession", out var session)`).
- `IsInScope` evaluation moved to the orchestrator — The `IsInScope` constraint is no longer evaluated by individual preemptive RAG handlers (`DataSourcePreemptiveRagHandler`, `DocumentPreemptiveRagHandler`). Instead, the `PreemptiveRagOrchestrationHandler` evaluates it after all handlers have run. When no references are produced across all sources and `IsInScope` is enabled, a scoping directive is injected. When tools are available, the directive encourages the model to try tool-based search before concluding no answer exists, allowing `search_data_source` and `search_documents` to discover relevant content that the initial preemptive search missed.
- Tool-search instructions when preemptive RAG is disabled — When preemptive RAG is disabled but data sources or documents are attached, the orchestrator now injects system-message instructions guiding the model to call search tools (`search_data_source`, `search_documents`) to retrieve internal knowledge. When `IsInScope` is enabled, the model is forced to use only tool-retrieved content and must refuse to answer from general knowledge. When `IsInScope` is disabled, the model is instructed to try the search tools first and may supplement with general knowledge only if no relevant results are found.
- RAG text normalization — Content and titles are now normalized before chunking and embedding using `RagTextNormalizer`. HTML tags, Markdown formatting, escaped HTML entities, and extraneous whitespace are stripped to produce clean plain text. This improves embedding quality, reduces token usage when injecting context into prompts, and prevents raw HTML from leaking into reference titles and the chat UI. Normalization uses `Microsoft.Extensions.DataIngestion.Markdig` for structured Markdown-to-plain-text conversion, combined with HTML tag stripping and entity decoding. Titles in citation references are also normalized at creation time as a defense-in-depth measure for existing indexed data.
- Token-aware chunking — The custom character-based text chunking has been replaced with `DocumentTokenChunker` from `Microsoft.Extensions.DataIngestion`. This uses actual LLM tokenizers (GPT-4o's `o200k_base`) to split content at token boundaries with configurable overlap, producing chunks that align better with embedding model token limits.
- `IngestionDocumentReader`-based document parsing — The custom `IDocumentTextExtractor` interface has been replaced with `Microsoft.Extensions.DataIngestion.IngestionDocumentReader`, the standard abstraction from `Microsoft.Extensions.DataIngestion`. Each document module now provides an `IngestionDocumentReader` implementation registered as a keyed singleton by file extension. The built-in `MarkdownReader` from `Microsoft.Extensions.DataIngestion.Markdig` is used for Markdown files. Custom PDF and OpenXml readers extend `IngestionDocumentReader` following the same patterns used in Microsoft's AI templates. Use `services.AddIngestionDocumentReader<T>(extensions)` to register custom readers — this replaces the previous `AddDocumentTextExtractor<T>()` method.
- Inline citation markers — The system prompt now instructs the AI model to include `[doc:N]` reference markers inline in its response text, immediately after the relevant statement. This enables users to see which statements are sourced from which references.
- Context-gated system tools — The `search_data_sources` and document processing tools (`search_documents`, `list_documents`, `read_document`, `read_tabular_data`) are now conditionally included in the tool registry based on context availability. `search_data_sources` is only available when a data source is attached to the AI profile or chat interaction. Document processing tools are only available when documents are attached to the session. This prevents the AI model from seeing tools it cannot use, eliminating hallucinated tool calls and reducing token overhead. The `SystemToolRegistryProvider` checks `AICompletionContext.DataSourceId` for data sources and `AICompletionContextKeys.HasDocuments` in `AdditionalProperties` for documents.
- Data source deletion cleanup — When a data source is deleted, all associated document chunks are now properly removed from the master knowledge base index via `IDataSourceVectorSearchService.DeleteByDataSourceIdAsync`. Elasticsearch uses native `DeleteByQuery`; Azure AI Search uses filter-based pagination with batch deletion. The cleanup runs as a background job after the HTTP response completes to avoid blocking the admin UI. Previously, `DeleteDataSourceDocumentsAsync` resolved the document index manager but never invoked any deletion.
- Chat session document cleanup — When an AI chat session is deleted (single or bulk), all uploaded session documents are now cleaned up: `AIDocument` records are deleted from the document store, and their vector index chunks are removed via a deferred task from all AI document index profiles. Previously, session deletion only removed the session record, leaving orphaned documents and index entries.
- Chat interaction document cleanup — When a chat interaction is deleted, the `AIDocument` records associated with it are now deleted from the document store in addition to the vector index chunk cleanup that was already in place. Previously, only the index chunks were removed, leaving the document store records orphaned.
- Strengthened tool-search instructions — When preemptive RAG is disabled and data sources or documents are attached, the system prompt now uses mandatory language ("MUST call... BEFORE generating any response") to ensure the AI model calls search tools before answering. Previously, advisory language was used, which some models ignored. When `IsInScope` is off, the model is instructed to search first and may fall back to general knowledge only if no results are found.
- Standardized prompt instruction format — All AI system prompts now use a consistent `[Section Header]` + numbered rules format across the codebase. This includes `[Rules]` for utility prompts (chart generation, data extraction, search query extraction, post-session analysis, tabular processing), `[Output Format]` for expected output examples, and the existing `[Scope Constraint]`, `[Knowledge Source Instructions]`, and `[Response Guidelines]` for RAG-related prompts.
- Sequential reference display indices — The JavaScript clients (`ai-chat.js`, `chat-interaction.js`) now remap cited reference indices to a sequential 1-based sequence when rendering. If the model cites `[doc:2]` and `[doc:5]` but not `[doc:1]`, the user sees superscripts 1 and 2 (not 2 and 5), with the reference list numbered accordingly. This prevents confusing gaps in visible numbering. The remapping uses a two-phase placeholder approach to avoid collisions during index substitution.
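The sequential remapping can be sketched in plain JavaScript. This is a minimal illustration of the two-phase placeholder idea, not the actual `ai-chat.js` implementation:

```javascript
// Remap [doc:N] markers to a sequential 1-based display order.
// Phase 1 swaps each marker for a unique placeholder so that, e.g.,
// rewriting 2 -> 1 cannot collide with an original [doc:1] later on.
function remapCitations(text) {
  const seen = new Map(); // original index -> display index (first-seen order)

  // Phase 1: replace every marker with a collision-proof placeholder.
  const withPlaceholders = text.replace(/\[doc:(\d+)\]/g, (match, n) => {
    const original = Number(n);
    if (!seen.has(original)) {
      seen.set(original, seen.size + 1);
    }
    return `\u0000doc:${original}\u0000`;
  });

  // Phase 2: substitute placeholders with sequential display markers.
  let out = withPlaceholders;
  for (const [original, display] of seen) {
    out = out.split(`\u0000doc:${original}\u0000`).join(`[doc:${display}]`);
  }

  return { text: out, order: [...seen.keys()] };
}
```

For example, `remapCitations("A [doc:2] B [doc:5] C [doc:2]")` renumbers the visible markers to 1 and 2 while preserving which underlying references they point at (the `order` array).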
Breaking: If you relied on `chunk.AdditionalProperties["ContentItemIds"]` or `chunk.AdditionalProperties["References"]` being set on streaming chunks by the Azure OpenAI provider, these are no longer set on individual chunks. References are now collected progressively during streaming from the orchestration context and tool execution context.
Breaking: If you wrote custom AI tools that read `AIToolExecutionContext` from `HttpContext.Items[nameof(AIToolExecutionContext)]`, update them to use `AIInvocationScope.Current?.ToolExecutionContext` instead. Similarly, if you read `HttpContext.Items["DataSourceId"]` or `HttpContext.Items["ToolSearchReferences"]`, these are now on `AIInvocationScope.Current.DataSourceId` and `AIInvocationScope.Current.ToolReferences` respectively.
Breaking: If you accessed `AIInvocationContext.ChatSession`, use `AIInvocationScope.Current?.Items["AIChatSession"]` instead.
Breaking: The `IDocumentTextExtractor` interface has been removed. If you implemented a custom document text extractor, migrate it to an `IngestionDocumentReader` subclass and register it with `services.AddIngestionDocumentReader<T>(extensions)` instead of `AddDocumentTextExtractor<T>(extensions)`. The reader's `ReadAsync` method returns an `IngestionDocument` instead of a raw string — the document processing service extracts text from the `IngestionDocument` automatically.
Note: Data sources and documents indexed before this release may contain raw HTML or Markdown in their content and titles. To benefit from normalization, re-index your data sources after upgrading.
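To illustrate the `IAIReferenceLinkResolver` extension point described in this section, a custom resolver for a hypothetical "Wiki" reference type might be sketched as below. The keyed registration call matches the release notes; the interface member shown (`ResolveAsync`) is an assumption, so check the actual `IAIReferenceLinkResolver` definition before implementing:

```csharp
// Hypothetical resolver: turns a "Wiki" reference ID into a link.
// The method name and signature are illustrative assumptions.
public sealed class WikiReferenceLinkResolver : IAIReferenceLinkResolver
{
    public Task<string> ResolveAsync(string referenceId)
        => Task.FromResult($"/wiki/pages/{referenceId}");
}
```

Registration follows the keyed-service pattern from the notes above, with `CompositeAIReferenceLinkResolver` dispatching by `referenceType`:

```csharp
services.AddKeyedScoped<IAIReferenceLinkResolver, WikiReferenceLinkResolver>("Wiki");
```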
Performance: Reduced String Allocations with ZString
- Adopted ZString — Replaced `System.Text.StringBuilder` with ZString's `Utf16ValueStringBuilder` across AI system message generation, RAG context building, tool summaries, streaming response accumulation, CSV export, and batch processing. ZString uses `ArrayPool<char>` pooled buffers instead of allocating new internal arrays, significantly reducing GC pressure in hot paths.
- Key areas converted: `DefaultMcpMetadataPromptGenerator`, `DefaultOrchestrator`, `CopilotOrchestrator`, `DocumentOrchestrationHandler`, `DocumentPreemptiveRagHandler`, `DataSourcePreemptiveRagOrchestrationHandler`, `SearchDocumentsTool`, `DataSourceSearchTool`, `TabularBatchProcessor`, `AIChatHub`, `ChatInteractionHub`, `ChatAnalyticsController`, `ApiAICompletionEndpoint`, `GenerateImageTool`, and others.
- Benchmark project added — A new BenchmarkDotNet project at `tests/CrestApps.OrchardCore.Benchmarks` measures the allocation difference across five representative scenarios (system message generation, RAG context, streaming, CSV export, tool summaries). Run with `dotnet run -c Release --project tests/CrestApps.OrchardCore.Benchmarks`.
Note: Code that passes `StringBuilder` parameters to `DefaultMcpMetadataPromptGenerator.AppendParameterSummary` must update to `ref Utf16ValueStringBuilder`. `Utf16ValueStringBuilder` is a disposable struct — always use `using var sb = ZString.CreateStringBuilder();` so pooled buffers are returned.
Performance: AI Chat Session Prompt Storage Separation
- Prompts moved to a dedicated document store — `AIChatSessionPrompt` objects are now stored as separate YesSql documents instead of being embedded in the `AIChatSession` document. This dramatically reduces the data loaded when listing sessions (admin pages, widgets, API), since only session metadata (title, sessionId, profileId, status) is fetched without loading potentially large prompt histories.
- Index-only session listings — `DefaultAIChatSessionManager.PageAsync` now uses `QueryIndex<AIChatSessionIndex>` instead of `Query<AIChatSession, AIChatSessionIndex>`, returning lightweight `AIChatSessionEntry` DTOs. This is especially impactful for the admin widget, which runs on every admin page request.
- New `IAIChatSessionPromptStore` — A new store interface provides `GetPromptsAsync`, `DeleteAllPromptsAsync`, and `CountAsync` operations for session prompts.
- Automatic data migration — Existing sessions are automatically migrated: prompts are extracted from legacy `AIChatSession` documents into separate `AIChatSessionPrompt` documents via a batched deferred task (50 sessions per batch). No manual intervention is required.
- `ChatMessageCompletedContext.Prompts` — The handler context now includes a `Prompts` property with the loaded prompts, so `IAIChatSessionHandler` implementations can access prompts without an additional store query.
Breaking: The `AIChatSession.Prompts` property has been removed. Code that previously accessed `session.Prompts` must now use `IAIChatSessionPromptStore.GetPromptsAsync(sessionId)` to load prompts.
Breaking: `AIChatSessionPrompt` now extends `CatalogItem`. The `Id` property has been replaced by `ItemId` (inherited from `CatalogItem`), and a `SessionId` property has been added to associate prompts with their session.
Breaking: `AIChatSessionResult.Sessions` now returns `IEnumerable<AIChatSessionEntry>` instead of `IEnumerable<AIChatSession>`. `AIChatSessionEntry` is a lightweight DTO containing only `SessionId`, `ProfileId`, `Title`, `UserId`, `ClientId`, `Status`, `CreatedUtc`, and `LastActivityUtc`.
Breaking: `PostSessionProcessingService.ProcessAsync` and `DataExtractionService.ProcessAsync` now require an `IReadOnlyList<AIChatSessionPrompt>` parameter. Callers must load prompts from the store and pass them explicitly.
Breaking: `AIChatSessionEventService.RecordSessionEndedAsync` now takes an `int promptCount` parameter instead of reading from `session.Prompts.Count`.
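For code migrating off the embedded prompt list, the change looks roughly like this sketch (the injected store variable name is illustrative):

```csharp
// v1.x — prompts were embedded in the session document:
// var prompts = session.Prompts;

// v2.0 — load them from the dedicated store
// (promptStore is an injected IAIChatSessionPromptStore):
var prompts = await promptStore.GetPromptsAsync(session.SessionId);

// Services that previously read session.Prompts now take the list explicitly,
// e.g. PostSessionProcessingService.ProcessAsync(..., prompts) — check the
// updated signatures for the exact parameter position.
```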
Update: Initial Prompt Session Start Behavior
WelcomeMessage is now treated as UI placeholder text for new chat sessions and is no longer prepended to model conversation history. A new profile option allows adding an initial prompt that creates the first assistant chat-history message immediately when a new session starts.
Fix: Chat Session Handler Lifecycle Events
`IAIChatSessionHandler` now extends `ICatalogEntryHandler<AIChatSession>`, adding full lifecycle events: `InitializingAsync`, `InitializedAsync`, `CreatingAsync`, `CreatedAsync`, `LoadedAsync`, `DeletingAsync`, `DeletedAsync`, `UpdatingAsync`, `UpdatedAsync`, `ValidatingAsync`, and `ValidatedAsync`. A new `AIChatSessionHandlerBase` base class (extending `CatalogEntryHandlerBase<AIChatSession>`) provides virtual no-op implementations for all lifecycle methods plus the existing `MessageCompletedAsync`, so handler implementations only need to override the events they care about.
`DefaultAIChatSessionManager` no longer depends on `IAIDocumentStore` directly. Instead, it invokes the `DeletingAsync`/`DeletedAsync` lifecycle events on all registered `IAIChatSessionHandler` implementations when sessions are deleted. Document cleanup is now handled by a dedicated `AIChatSessionDocumentCleanupHandler` registered in the AI Documents feature, resolving a dependency injection exception when the AI Documents feature was not enabled.
Breaking: `IAIChatSessionHandler` now inherits from `ICatalogEntryHandler<AIChatSession>`. Existing implementations should extend `AIChatSessionHandlerBase` instead of implementing the interface directly to avoid having to implement all lifecycle methods.
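A migrated handler might then look like the following sketch; the override shown uses the `ChatMessageCompletedContext` type named in these notes, but the exact signatures are assumptions to verify against `ICatalogEntryHandler<AIChatSession>`:

```csharp
using System.Threading.Tasks;

public sealed class SessionAuditHandler : AIChatSessionHandlerBase
{
    // Only override the events you care about; AIChatSessionHandlerBase
    // supplies virtual no-op implementations for every other lifecycle event
    // (InitializingAsync, CreatedAsync, DeletedAsync, ...).
    public override Task MessageCompletedAsync(ChatMessageCompletedContext context)
    {
        // context.Prompts carries the loaded prompts, so no extra store
        // query is needed here.
        return Task.CompletedTask;
    }
}
```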
Fix: OpenXml Excel Data Extraction
Fixed `OpenXmlIngestionDocumentReader.GetCellValue` to correctly handle Excel cells stored as inline strings (`InlineString` cell type) and boolean values. Previously, inline string cells returned empty text because the code only checked `CellValue`, which is null for inline strings — the text is stored in the `InlineString` element instead. Shared string table lookup was also changed from LINQ `ElementAtOrDefault` to the direct `ChildElements` indexer for O(1) access.
Refactor: Unified DataSourceMetadata Type
`AIProfileDataSourceMetadata` and `ChatInteractionDataSourceMetadata` have been merged into a single `DataSourceMetadata` type in `CrestApps.OrchardCore.AI.Models`. Both types were identical (containing only a `DataSourceId` property) and served the same purpose on different entity types. Data migrations automatically rename the stored JSON property keys from the legacy names to `DataSourceMetadata`.
Breaking: `AIProfileDataSourceMetadata` (from `CrestApps.OrchardCore.AI.Core.Models`) and `ChatInteractionDataSourceMetadata` (from `CrestApps.OrchardCore.AI.Chat.Interactions`) have been removed. Use `DataSourceMetadata` (from `CrestApps.OrchardCore.AI.Models`) instead.
Performance: Separated Document Chunk Storage
- Document chunks are now stored as individual records — `AIDocument` no longer stores chunks (with embeddings) inline. Each text chunk is a separate `AIDocumentChunk` record in YesSql, linked by `AIDocumentId`. This eliminates the SQL Server `nvarchar(MAX)` overflow problem that occurred with large documents (100+ chunks × 1536-dim embeddings ≈ 1.5–5+ MB of JSON per document).
- Embeddings are cached in YesSql — Each `AIDocumentChunk` stores its `Embedding` (`float[]`) alongside the `Content`. Embeddings are generated once during document processing and reused during vector index rebuilds, avoiding repeated calls to the embedding API. If an embedding is unavailable (e.g., older data migrated before this change), the chunk is indexed without a vector.
- New `IAIDocumentChunkStore` — Provides `GetChunksByAIDocumentIdAsync`, `GetChunksByReferenceAsync`, and `DeleteByDocumentIdAsync` for managing chunks independently of the parent `AIDocument`.
- New `AIDocumentChunkIndex` — YesSql map index on `AIDocumentId`, `ReferenceId`, `ReferenceType`, and `Index` for efficient chunk queries.
- Simplified indexing pipeline — During re-indexing, stored embeddings from `AIDocumentChunk.Embedding` are used directly. The indexing code no longer resolves embedding generators from index profile metadata at indexing time. Embeddings are generated once during document upload/processing.
- Document tool updates — `ReadDocumentTool` and `ReadTabularDataTool` now reconstruct full text by querying chunks from the chunk store and concatenating them in order, instead of reading a `Text` property.
Breaking: The `AIDocument.Chunks` and `AIDocument.Text` properties have been removed. Code that previously accessed `document.Chunks` must now use `IAIDocumentChunkStore.GetChunksByAIDocumentIdAsync(documentId)`. Code that accessed `document.Text` should concatenate chunks from the store.
Breaking: `AIDocumentChunk` (the indexing DTO) has been renamed to `AIDocumentChunkContext`. The name `AIDocumentChunk` now refers to the new `CatalogItem` subclass used for chunk storage.
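Code that previously read `document.Text` can be migrated along these lines — a sketch, with the `Index` and `Content` property names and the injected `chunkStore` variable inferred from the descriptions above rather than confirmed against the shipped type:

```csharp
using System.Linq;
using Cysharp.Text;

// chunkStore is an injected IAIDocumentChunkStore.
var chunks = await chunkStore.GetChunksByAIDocumentIdAsync(documentId);

// Reassemble the full text in chunk order.
using var sb = ZString.CreateStringBuilder();
foreach (var chunk in chunks.OrderBy(c => c.Index))
{
    sb.AppendLine(chunk.Content);
}

var fullText = sb.ToString(); // replaces the removed AIDocument.Text
```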
Breaking Changes
Typed AI Deployments
AI deployments are now first-class typed entities. Each deployment has a `Type` property (`Chat`, `Utility`, `Embedding`, `Image`, `SpeechToText`) and an `IsDefault` flag.
Key changes:
- Connections are pure connections — deployment name fields (`ChatDeploymentName`, `EmbeddingDeploymentName`, `UtilityDeploymentName`, `ImagesDeploymentName`) on `AIProviderConnection` are deprecated
- Typed deployment resolution — `IAIDeploymentManager` resolves deployments by type with a fallback chain (explicit → connection default → global default)
- Global default deployments — new settings page under Settings > Artificial Intelligence > Default Deployments to configure global defaults for Utility, Embedding, and Image deployments
- AI Profiles — `DeploymentId` renamed to `ChatDeploymentId`, new `UtilityDeploymentId` field added
- Chat Interactions — same changes as AI Profiles
- Deployment dropdowns — now show all deployments grouped by connection for easier selection
- Automatic migration — existing connection deployment names are automatically migrated to typed `AIDeployment` records on startup
- appsettings.json — both the old format (deployment names on the connection) and the new format (`Deployments` array) are supported
Breaking changes:
- `AIProviderConnection.ChatDeploymentName` → deprecated (use typed `AIDeployment` records instead)
- `AIProviderConnection.EmbeddingDeploymentName` → deprecated
- `AIProviderConnection.UtilityDeploymentName` → deprecated
- `AIProviderConnection.ImagesDeploymentName` → deprecated
- `AIProvider.DefaultChatDeploymentName` → deprecated
- `AIProvider.DefaultEmbeddingDeploymentName` → deprecated
- `AIProvider.DefaultUtilityDeploymentName` → deprecated
- `AIProvider.DefaultImagesDeploymentName` → deprecated
- `AIProfile.DeploymentId` → renamed to `ChatDeploymentId` (old JSON property still deserialized for backward compat)
- `ChatInteraction.DeploymentId` → renamed to `ChatDeploymentId`
- `AICompletionContext.DeploymentId` → renamed to `ChatDeploymentId`
For detailed migration instructions, see the Migrating to Typed AI Deployments guide.
Changed: Navigation Paths
Orchard Core v3 removed the Configuration tab. Update any documentation or code that references:
- `Configuration → Features` → use `Tools → Features`
- `Configuration → Settings` → use `Settings` directly
Changed: Package Version Scheme
Package versions now use the `2.0.0-preview-XXXX` scheme instead of `2.0.0-beta-XXXX`.
Changed: Renamed OpenAIChatApp Resource to AIChatApp
The frontend resource OpenAIChatApp (CSS and JavaScript) has been renamed to AIChatApp to reflect that the chat UI is provider-agnostic and not specific to OpenAI.
If you reference these resources in custom views or templates, update the resource names:
```diff
- <style asp-name="OpenAIChatApp" at="Head"></style>
+ <style asp-name="AIChatApp" at="Head"></style>

- <script asp-name="OpenAIChatApp" at="Foot"></script>
+ <script asp-name="AIChatApp" at="Foot"></script>
```
Bug Fixes
Document Tools Fail to Resolve Chat Session Documents
Affected features: CrestApps.OrchardCore.AI.Documents.ChatSessions
When documents were uploaded to an AI Chat Session (via the chat widget or session UI), the AI model could not access them because:
- The `DocumentOrchestrationHandler` did not add session documents to the orchestration context, so the system message never told the model about them.
- The document tools (`list_documents`, `read_document`, `read_tabular_data`, `search_documents`) only queried by profile ID with the `Profile` reference type, ignoring session-scoped documents.
What changed:
- `DocumentOrchestrationHandler.BuildingAsync` — Now adds session documents (from `AdditionalProperties["Session"]`) to `context.Context.Documents` alongside profile-level documents, so the system message template lists all available documents.
- `ListDocumentsTool` — Now queries both profile-level and session-level documents when the resource is an `AIProfile`, combining the results.
- `ReadDocumentTool` — Now validates document ownership against both the profile ID and the session ID, so documents from either source can be read.
- `ReadTabularDataTool` — Previously only supported `ChatInteraction` resources. Now supports `AIProfile` resources and validates against both profile and session reference IDs.
- `SearchDocumentsTool` — Now searches across both profile and session vector indexes when both scopes have documents, combining and deduplicating results by score.
- `DocumentPreemptiveRagHandler` — Searches across both profile and session scopes. Also removed the overly restrictive `HasSessionDocuments` AND condition in `CanHandleAsync` that prevented preemptive RAG for `ChatInteraction`-attached documents.
Upload Failure Errors Not Reported in Chat UI
When a document upload failed (e.g., unsupported file type, processing error), the error was only logged to the browser console. Users had no visual indication that their upload had failed.
What changed:
- The chat widget now displays failed uploads as red error badges in the document bar, showing the file name and error message.
- Users can dismiss individual error badges by clicking the close button.
- HTTP-level upload failures and network errors are also surfaced to the user.
Chat Feedback Metrics Use Majority Voting Instead of Per-Message Counts
Affected features: CrestApps.OrchardCore.AI.Chat, CrestApps.OrchardCore.AI.Agent
Each chat message can receive individual thumbs up or thumbs down feedback, but the session-level analytics aggregated all per-message ratings into a single boolean (UserRating) using majority voting. This caused inaccurate reporting—for example, a session with 3 thumbs up and 2 thumbs down would report a single "positive" rating instead of the actual counts.
What changed:
- `AIChatSessionEvent` — Added `ThumbsUpCount` and `ThumbsDownCount` (`int`) properties to store per-session counts alongside the legacy `UserRating` field for backward compatibility.
- `AIChatSessionMetricsIndex` — Added corresponding index columns with an `UpdateFrom2Async` migration.
- `AIChatHub.RateMessage` — Now computes actual thumbs up/down counts from the per-message ratings and passes them to the event service, instead of using majority voting.
- `AIChatAnalyticsFeedbackDisplayDriver` — Uses `Sum()` of the count fields across sessions instead of counting sessions with a positive/negative rating.
- `QueryChatSessionMetricsTool` — Uses `Sum()` of the count fields for accurate analytics reporting.
- `ChatAnalyticsFeedback.cshtml` — Updated tooltip text to clarify these are message-level rating counts.
Post-Session Tasks Silently Fail With No Retry
Affected features: CrestApps.OrchardCore.AI
When a chat session closed, post-session tasks (data extraction, email sending) ran inline and if the AI service was unavailable, the error was caught, logged, and the task was permanently lost. There was no retry mechanism.
What changed:
- `AIChatSession` — Added `PostSessionProcessingStatus` (enum: `None`, `Pending`, `Completed`, `Failed`), `PostSessionProcessingAttempts` (`int`), and `PostSessionProcessingLastAttemptUtc` (`DateTime?`) fields for tracking.
- `AIChatSessionIndex` — Added a `PostSessionProcessingStatus` column with an `UpdateFrom2Async` migration.
- `PostSessionProcessingChatSessionHandler` — Now marks sessions as `Pending` before attempting processing, then `Completed` on success. On failure, the status stays `Pending` so the background task can retry.
- `AIChatSessionCloseBackgroundTask` — Rewritten with two-phase processing: Phase 1 closes inactive sessions, Phase 2 retries pending post-session tasks with a maximum of 3 attempts and a 5-minute retry delay. Sessions exceeding max attempts are marked `Failed`.
FunctionInvocationMetadata Not Read From Legacy Profile Property Key
Affected features: CrestApps.OrchardCore.AI
Profiles created in earlier versions stored tool selections under the JSON property key `AIProfileFunctionInvocationMetadata`. After the model was renamed to `FunctionInvocationMetadata`, the tool resolution code only checked the new key name, silently ignoring tool selections saved under the legacy key.
What changed:
- `AIProfileCompletionContextBuilderHandler` — Now falls back to reading from the legacy `AIProfileFunctionInvocationMetadata` property key when the current `FunctionInvocationMetadata` key has no tool names.
- `AIProfileToolsDisplayDriver` — The edit view reads from both keys for backward compatibility, correctly pre-selecting tools saved under the legacy key. On save, the legacy key is removed and data is written only under the current key, completing the migration.
Post-Close Processing Not Resilient for Analytics and Conversion Goals
Affected features: CrestApps.OrchardCore.AI
When a chat session closed, analytics recording (resolution detection and conversion goal evaluation) ran as a separate step outside the retry pipeline. If the AI service was unavailable at that moment, analytics were permanently lost with no retry. Additionally, post-session tasks that returned empty results were incorrectly left in Pending status instead of being marked as completed. Conversion goals were evaluated inside the analytics step, so a failure in either would prevent both from completing.
What changed:
- `AIChatSession` — Added `IsPostSessionTasksProcessed`, `IsAnalyticsRecorded`, and `IsConversionGoalsEvaluated` boolean flags to independently track which processing steps have completed, enabling partial-completion-aware retries.
- `AIChatSessionCloseBackgroundTask` — Refactored into three independent resilient steps: (1) post-session tasks, (2) session analytics recording, and (3) conversion goal evaluation. Each step is tracked and retried independently. The retry loop only re-runs steps that haven't completed yet. Overall status is marked `Completed` only when all applicable steps succeed. Comprehensive structured logging was added at every step boundary (start, complete, skip, fail, retry) for production debugging.
- `PostSessionProcessingChatSessionHandler` — Fixed a bug where successful processing that returned empty results left the session stuck in `Pending` status. Now correctly marks `IsPostSessionTasksProcessed = true` regardless of whether the AI produced output.
- `PostSessionProcessingService` — Removed internal try/catch blocks from `ProcessAsync`, `EvaluateResolutionAsync`, and `EvaluateConversionGoalsAsync`. Previously, exceptions (e.g., the AI model returning non-JSON responses) were caught internally and masked as empty results, preventing the retry pipeline from detecting failures. Exceptions now propagate to callers, allowing the background task's retry mechanism to work correctly.
- `PostSessionProcessingService.ProcessAsync` — Fixed tool execution failing when post-session tasks require AI tools (e.g., `sendEmail`). The method used `GetResponseAsync<T>` (structured JSON output), which conflicts with tool calling — the model was forced to produce JSON instead of executing tool calls. When tools are configured, the method now uses the non-generic `GetResponseAsync` to allow tool execution, then attempts to parse structured results from the response text. Additionally, the raw `IChatClient` from `GetChatClientAsync` lacked the `FunctionInvokingChatClient` middleware, so `tool_call` messages returned by the model were never actually executed. The client is now wrapped with `.AsBuilder().UseFunctionInvocation().Build()` to enable tool execution.
- `PostSessionProcessingService` — Prompt template refactoring — All hardcoded prompt-building methods (`BuildProcessingPrompt`, `BuildConversationTranscript`, and inline goal prompt construction) have been replaced with Liquid-based AI prompt templates. Three new user prompt templates were added: `post-session-analysis-prompt`, `resolution-analysis-prompt`, and `conversion-goal-evaluation-prompt`. These templates use Liquid `{% for %}` loops to dynamically render tasks, goals, and conversation transcripts, making prompts configurable and maintainable without code changes.
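The tool-enabled path follows the standard Microsoft.Extensions.AI middleware pattern; as a sketch, with `rawClient`, `messages`, and `tools` as placeholders for what the service actually resolves:

```csharp
using Microsoft.Extensions.AI;

// rawClient comes from the provider (GetChatClientAsync). Without the
// function-invocation middleware, tool_call messages returned by the
// model are never executed.
IChatClient client = rawClient
    .AsBuilder()
    .UseFunctionInvocation()
    .Build();

// With tools configured, use the non-generic GetResponseAsync so the model
// can call tools instead of being forced into structured JSON output; parse
// structured results from the final response text afterwards.
var response = await client.GetResponseAsync(messages, new ChatOptions
{
    Tools = tools,
});
```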
Post-Session Tasks Stuck in Pending Status After Processing
Affected features: CrestApps.OrchardCore.AI
When a post-session task (e.g., summary) was configured with tool capabilities (e.g., `sendEmail`), the AI model might not return structured JSON results even though the tool executed. In this case, `ProcessAsync` returned `null` and the task remained in `Pending` status indefinitely — it was never marked as `Failed` or `Succeeded`. Additionally, `ProcessedAtUtc` was never set for tasks that stayed pending, and the exception handler used `DateTime.UtcNow` instead of `IClock`.
What changed:
- `PostSessionResult` — Added an `Attempts` property (`int`) to track how many times each individual task has been processed. This enables per-task retry logic independent of the session-level retry counter.
- `PostSessionProcessingService.ProcessWithToolsAsync` — Fixed the root cause where the method returned `null` when the AI model's response was not strict JSON, discarding valid results. The method now uses a multi-strategy response parser: (1) direct JSON deserialization, (2) JSON extraction from markdown code fences, (3) JSON object extraction from surrounding text. When all JSON parsing strategies fail and there is a single semantic task, the response text is used directly as the task value. Additionally, the response text is now extracted from the last assistant message with text content, skipping intermediate tool call and tool result messages (e.g., "Email sent successfully") that could be mistakenly picked up as the model's final output. Added comprehensive debug logging of the raw AI response text and each parsing strategy attempted.
- `PostSessionProcessingService.ApplyResults` — Added per-task debug logging for applied and skipped results to aid troubleshooting.
- `AIChatSessionCloseBackgroundTask.RunPostSessionTasksAsync` — Before calling `ProcessAsync`, the method now increments `Attempts` for each non-succeeded task. After processing, tasks that produced no result and have exhausted all retry attempts (3) are permanently marked as `Failed` with `ProcessedAtUtc` set via `IClock`. Tasks below the attempt limit remain `Pending` for the next retry cycle. The completion check (`IsPostSessionTasksProcessed`) now considers tasks fully processed when all are either `Succeeded` or permanently `Failed`, allowing session-level processing to complete even when individual tasks fail.
- `AIChatSessionCloseBackgroundTask` (catch block) — Fixed `DateTime.UtcNow` usage to use `IClock.UtcNow`. On exception, the error message is recorded on all non-succeeded tasks, but only tasks that have reached the maximum attempt count are marked as `Failed`. Tasks with remaining attempts stay `Pending` for retry.
Feedback Progress Bar Icon and Percentage Misaligned
Affected features: CrestApps.OrchardCore.AI.Chat
In the User Feedback analytics card, the thumbs icon appeared at the left edge of the progress bar segment while the percentage text was centered, making the layout look off-balance.
What changed:
- `ChatAnalyticsFeedback.cshtml` — Progress bar segments now use flexbox (`d-flex align-items-center justify-content-center gap-1`) to center both the icon and percentage as a group, with horizontal padding for better spacing.
JSON.parse Error on AI Profile and Chat Interaction Edit Pages
Affected features: CrestApps.OrchardCore.AI.Prompting
When editing an AI Profile or Chat Interaction that uses prompt templates, the browser threw `SyntaxError: JSON.parse: unexpected end of data at line 1 column 2` because `Html.Raw()` was used inside HTML `data-*` attributes. The JSON double-quote characters broke the attribute boundary, truncating the JSON to just `{`.
What changed:
- `AIProfilePromptSelection.Edit.cshtml` — Removed the `Html.Raw()` wrapper from the `data-json` and `data-params` attributes. Razor's default HTML encoding produces `&quot;`, which the browser's `dataset` API automatically decodes, so `JSON.parse()` receives valid JSON.
- `ChatInteractionPromptSelection.Edit.cshtml` — Same fix applied.
SignalR Reconnection Loses Initial Chat Session
Affected features: CrestApps.OrchardCore.AI.Chat
When the SignalR WebSocket connection dropped during initial connection establishment (observed in production as a sub-second disconnect/reconnect cycle), the `StartSession` hub invocation was lost. The `onreconnected` handler only logged a message but did not retry session creation, leaving the chat widget with no session and no initial prompts displayed.
What changed:
- `ai-chat.js` — The `onreconnected` handler now checks whether a session was already established. If the session was started, it reloads the current session to restore state. If no session was started and `autoCreateSession` is enabled, it retries `startNewSession()` to recover from the lost initial invocation.
AI Template Service Warning Logging
Affected features: CrestApps.OrchardCore.AI.Prompting
When an AI template ID was requested but not found (e.g., due to a typo or a disabled feature), the service returned null silently with no diagnostic output, making it difficult to troubleshoot missing prompt templates in production.
What changed:
- `OrchardCoreAITemplateService` — Now logs a warning with the requested template ID and all available template IDs when a template lookup fails. This helps diagnose template resolution issues in production logs.
Profile-Attached Documents Not Indexed in Vector Search
Affected features: CrestApps.OrchardCore.AI.Documents.Profiles
Documents uploaded to an AI Profile's knowledge base (via the AI Profile editor in the admin dashboard) were saved to the YesSql document store but never pushed to the vector search index. This meant the preemptive RAG pipeline and search_documents tool could not find profile-attached documents, effectively making the knowledge base non-functional for profiles.
The same indexing logic already worked correctly for chat session documents (uploaded via the session endpoint), because UploadSessionDocumentEndpoint scheduled a deferred task to push document chunks to all AI document index profiles. The profile driver (AIProfileDocumentsDisplayDriver) was missing this step entirely.
What changed:
- `AIProfileDocumentsDisplayDriver.UpdateAsync` — After processing and storing new documents, the driver now schedules a deferred task (`ShellScope.AddDeferredTask`) that pushes document chunks to all configured AI document vector search index profiles, following the exact same pattern used by `UploadSessionDocumentEndpoint.IndexDocumentChunksAsync`.
- `AIProfileDocumentsDisplayDriver.UpdateAsync` — When existing documents are removed, the driver now also schedules a deferred task to remove the corresponding chunks from all vector search index profiles, matching the `RemoveSessionDocumentEndpoint.RemoveDocumentChunksAsync` pattern.
- Added `IndexDocumentChunksAsync` and `RemoveDocumentChunksAsync` static methods to the driver for deferred task execution.
Enhanced Debug Logging for AI Document Processing and Orchestration
Affected features: CrestApps.OrchardCore.AI, CrestApps.OrchardCore.AI.Documents
Production debugging of document processing, RAG retrieval, and system message composition was difficult because there was insufficient logging at key decision points. Added comprehensive LogLevel.Debug logging throughout the pipeline to enable end-to-end traceability.
What changed:
- `DefaultOrchestrationContextBuilder` — Logs the final composed system message length, content preview, and resource type after all orchestration handlers have run, immediately before handing off to the AI model.
- `DocumentOrchestrationHandler` — Added `ILogger` injection. Logs the number of documents populated from chat interactions and AI profiles during the `BuildingAsync` phase, and logs when no documents are found.
- `DocumentPreemptiveRagHandler` — Logs at every decision point: index profile lookup result, vector search service availability, embedding configuration completeness, resolved search scopes (with resource IDs and reference types), query count, `topN` setting, and search result count.
- `DefaultAIDocumentProcessingService` — Logs text extraction result length, chunk count after normalization, embedding generation success, and reasons for skipping embedding generation (unsupported extension, empty text, no generator).
- `AIProfileDocumentsDisplayDriver` — Logs when vector index push and chunk removal tasks are scheduled.
All debug log statements are guarded with `_logger.IsEnabled(LogLevel.Debug)` to avoid argument evaluation overhead when debug logging is disabled.
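The guard is the standard `ILogger` idiom — a sketch with an illustrative message and variable names:

```csharp
using Microsoft.Extensions.Logging;

// Guarding avoids evaluating expensive arguments (previews, truncation,
// count enumeration) when Debug logging is disabled.
if (_logger.IsEnabled(LogLevel.Debug))
{
    _logger.LogDebug(
        "Composed system message: {Length} chars for resource type {ResourceType}.",
        systemMessage.Length,
        resourceType);
}
```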
AI Templates Not Discovered in NuGet/Docker Deployments
Affected features: CrestApps.OrchardCore.AI.Prompting
AI prompt templates (`.md` files in `AITemplates/Prompts/` directories) were not discovered at runtime when modules were deployed via NuGet packages (e.g., in Docker containers). The `ModuleAITemplateProvider` relied on `IHostEnvironment.ContentRootFileProvider.GetDirectoryContents()` to enumerate template files, which only finds physical files on disk. In NuGet deployments, templates are embedded as assembly resources by OrchardCore's Module Targets, but the `ContentRootFileProvider`-based discovery and the `AppDomain.CurrentDomain.GetAssemblies()` fallback were both unreliable in containerized environments.
What changed:
- `ModuleAITemplateProvider` — Rewritten to use `IApplicationContext.Application.GetModule()`, following the same pattern as OrchardCore's `ModuleEmbeddedFileProvider`. The provider now enumerates each module's `AssetPaths` to find templates matching the `AITemplates/Prompts/` convention, then reads file contents via `Module.GetFileInfo()`, which resolves embedded resources directly from the assembly. This approach works reliably in both development (project references) and production (NuGet/Docker) environments without depending on `IHostEnvironment.ContentRootFileProvider` timing or `AppDomain.CurrentDomain.GetAssemblies()` assembly loading order.
- `EmbeddedResourceAITemplateProvider` — Now handles both standard `.` separators and OrchardCore's `>` path separators in embedded resource logical names, making it compatible with assemblies built using `OrchardCore.Module.Targets`.
Initial Prompt Not Displayed on New Chat Sessions
Affected features: CrestApps.OrchardCore.AI.Chat
When an AI Profile had an `InitialPrompt` configured, new chat sessions displayed a blank white page instead of triggering the initial prompt. The view correctly set `autoCreateSession: true` and hid the placeholder, but the session was created with zero messages. Since the `LoadSession` payload contained no messages and the placeholder was hidden, users saw an empty chat area.
What changed:
- `ai-chat.js` — The `LoadSession` handler now checks if the session is new (zero messages) and an `initialPrompt` is configured. When both conditions are met, it automatically sends the initial prompt as the first user message, triggering an AI response.
- `Widget-AIChat.cshtml`, `AIChatAdminWidget.cshtml`, `AIChatSessionChat.cshtml` — All chat views now pass the `initialPrompt` text in the JavaScript configuration object (JSON-encoded via `JsonSerializer.Serialize`) so the client-side code can auto-send it.
SendEmail Tool Crashes When Invoked From Background Task
Affected features: CrestApps.OrchardCore.AI.Agent
When a chat session was closed by the `AIChatSessionCloseBackgroundTask` (inactivity timeout), post-session tasks that invoke the `sendEmail` tool failed silently. The `SendEmailTool.InvokeCoreAsync` method accessed `httpContextAccessor.HttpContext.User` without null-checking `HttpContext`, which is null in background task contexts (no HTTP request). This caused a `NullReferenceException` that was caught by `FunctionInvokingChatClient`, retried up to 3 times (all failing), and ultimately resulted in the AI model returning non-JSON text that could not be parsed — producing 0 post-session results with no visible error.
Sessions closed inline during an HTTP request (via `PostSessionProcessingChatSessionHandler`) were not affected because `HttpContext` is available in that path.
What changed:
- `SendEmailTool.InvokeCoreAsync` — Changed `httpContextAccessor.HttpContext.User` to `httpContextAccessor.HttpContext?.User` with a null guard. When `HttpContext` is null (background task), the email is sent without a sender address (using the system default), instead of crashing.
- `GetContentItemLinkTool.InvokeCoreAsync` — Added a null `HttpContext` guard. When invoked from a background task, returns a descriptive fallback message instead of crashing.
- `CreateOrUpdateContentTool.InvokeCoreAsync` — Added a null `HttpContext` guard. Skips URI generation when running in a background context and logs a debug warning.
- `PostSessionProcessingService` — Added comprehensive `LogLevel.Debug` logging throughout the processing pipeline: tool resolution results (success/failure per tool), code path selection (tools vs. structured output), AI response details (message count, tool call count, tool result count), and JSON parse failures with truncated response text. All debug statements are guarded with `_logger.IsEnabled(LogLevel.Debug)`.
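The guard itself is a small change plus a fallback; a sketch, where the sender-resolution line is illustrative rather than the tool's actual code:

```csharp
// v1.x — throws NullReferenceException when running in a background task:
// var user = httpContextAccessor.HttpContext.User;

// v2.0 — tolerate a missing HttpContext:
var user = httpContextAccessor.HttpContext?.User;

// Illustrative fallback: with no request (and therefore no user), leave the
// sender unset so the system default sender address is used instead.
var sender = user?.Identity?.Name; // null in background contexts
```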
Per-Task Post-Session Result Tracking
Affected features: CrestApps.OrchardCore.AI, CrestApps.OrchardCore.AI.Chat
Previously, post-session processing was all-or-nothing: all tasks were sent to the AI model in a single request, and if any task (or the request itself) failed, all tasks were lost and retried together on the next attempt. There was no visibility into which individual tasks succeeded or failed.
What changed:
- `PostSessionTaskResultStatus` — New enum (`Pending`, `Succeeded`, `Failed`) for tracking individual task outcomes.
- `PostSessionResult` — Extended with `Status` (`PostSessionTaskResultStatus`) and `ErrorMessage` fields. Each result now records whether the task succeeded, failed, or is still pending.
- `PostSessionProcessingService.ProcessAsync` — Now reads the session's existing `PostSessionResults` and filters out tasks that have already succeeded. Only tasks with `Pending` or `Failed` status are included in the AI prompt. On success, returned results are marked with `Status = Succeeded`.
- `AIChatSessionCloseBackgroundTask.RunPostSessionTasksAsync` — Completely rewritten for per-task tracking. Initializes `Pending` entries for all configured tasks, merges results from AI processing, and marks tasks as `Failed` with error messages on exceptions. `IsPostSessionTasksProcessed` is only set to `true` when all tasks have `Status = Succeeded`.
- `PostSessionProcessingChatSessionHandler.MessageCompletedAsync` — Rewritten with the same per-task tracking and merge logic as the background task, ensuring consistent behavior between inline and background processing paths.
This means that if a profile has 3 post-session tasks and task 2 fails, tasks 1 and 3 are preserved as `Succeeded` and only task 2 is retried on the next processing attempt. The `PostSessionResults` dictionary now provides full per-task visibility for debugging and monitoring.
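The retry-filtering behavior can be sketched in a few lines. The enum and status names come from the release notes; the surrounding types and task names are simplified assumptions:

```csharp
// Minimal sketch of per-task post-session result tracking: only tasks that
// have not yet succeeded are retried, and the session is marked processed
// only when every task has succeeded.
using System;
using System.Collections.Generic;
using System.Linq;

public enum PostSessionTaskResultStatus
{
    Pending,
    Succeeded,
    Failed,
}

public sealed class PostSessionResultSketch
{
    public PostSessionTaskResultStatus Status { get; set; } = PostSessionTaskResultStatus.Pending;
    public string ErrorMessage { get; set; }
}

public static class Program
{
    public static void Main()
    {
        // Hypothetical task names for illustration.
        var results = new Dictionary<string, PostSessionResultSketch>
        {
            ["summarize"] = new() { Status = PostSessionTaskResultStatus.Succeeded },
            ["extract-fields"] = new() { Status = PostSessionTaskResultStatus.Failed, ErrorMessage = "timeout" },
            ["score-goals"] = new() { Status = PostSessionTaskResultStatus.Pending },
        };

        // Succeeded tasks are preserved; only Pending/Failed tasks go back
        // into the next AI prompt.
        var toRetry = results
            .Where(pair => pair.Value.Status != PostSessionTaskResultStatus.Succeeded)
            .Select(pair => pair.Key)
            .OrderBy(name => name)
            .ToList();

        Console.WriteLine(string.Join(", ", toRetry)); // extract-fields, score-goals

        // IsPostSessionTasksProcessed-style check: all tasks succeeded?
        var allProcessed = results.Values.All(r => r.Status == PostSessionTaskResultStatus.Succeeded);
        Console.WriteLine(allProcessed); // False
    }
}
```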
AI Resolution Detection Decoupled From Session Metrics
Affected features: CrestApps.OrchardCore.AI, CrestApps.OrchardCore.AI.Chat
The Enable AI Resolution Detection checkbox was previously nested inside the session metrics settings section and only visible when Enable Session Metrics was checked. However, resolution detection enhances session closing logic (determining whether a user's query was resolved) and is useful independently of metrics collection.
What changed:
- `AIProfileAnalytics.Edit.cshtml` — Moved the "Enable AI Resolution Detection" checkbox outside the session metrics wrapper so it is always visible regardless of the metrics toggle state. Updated the hint text to clarify independent operation.
- `AIChatSessionCloseBackgroundTask` — Updated the `needsAnalytics` and `analyticsComplete` conditions to consider `EnableAIResolutionDetection` independently of `EnableSessionMetrics`. Resolution detection now runs even when session metrics are disabled.
Migration Guide from v1.x
Step 1: Update Package References
Update all CrestApps package references to 2.0.0-preview-0001 or later:
```xml
<PackageReference Include="CrestApps.OrchardCore.Cms.Core.Targets" Version="2.0.0-preview-0001" />
```
Step 2: Remove Prompt Routing Code
If you used `AddPromptProcessingIntent`, `IPromptIntentDetector`, or `IPromptProcessingStrategy`, remove these references. The orchestrator now handles all request processing automatically.
Step 3: Update Tool Registrations
If you registered custom AI tools, update to the new fluent API:
```csharp
// Old (v1.x)
// services.AddSingleton<AIFunction, MyTool>();

// New (v2.0)
services.AddAITool<MyTool>(MyTool.TheName)
    .WithTitle("My Tool")
    .WithDescription("Description for the orchestrator")
    .Selectable();
```
Step 4: Enable New Features
New modules are not enabled by default. Enable them via Tools → Features in the admin dashboard as needed.
Enhancements
AI Chat Session Analytics
- AI Resolution Detection: Sessions are now analyzed by AI to determine whether the user's query was semantically resolved, replacing the timeout-only abandonment detection that produced 100% false-positive abandonment rates. Enabled by default when session metrics are active. Configurable per-profile via the Enable AI Resolution Detection checkbox.
- Conversion Metrics: New goal-based conversion rate system. Define custom goals per profile with configurable scoring ranges (default 0–10). After session close, AI evaluates the conversation against each goal and produces per-goal scores with reasoning. Aggregate conversion metrics are stored on the session event and available in the analytics dashboard and the `queryChatSessionMetrics` tool.
- Resolution & Conversion Dashboard Card: New analytics dashboard section displaying resolution rate, resolved/unresolved session counts, average conversion score, high/low performing session percentages, and an overall conversion progress bar.
- Enriched Workflow Events: All chat session workflow events (`AIChatSessionClosedEvent`, `AIChatSessionPostProcessedEvent`, `AIChatSessionFieldExtractedEvent`) now include full `Session` and `Profile` objects in the input dictionary, not just IDs.
- All Fields Extracted Event: New `AIChatSessionAllFieldsExtractedEvent` workflow event that triggers once when all configured data extraction fields have been collected for a session.
- ExtractedFieldChange Liquid Access: `ExtractedFieldChange` and `ConversionGoalResult` types are now registered in the Liquid `MemberAccessStrategy` for use in workflow expressions.
AI Profile Storage
- Per-Document Storage: AI Profiles are now stored as individual YesSql documents instead of a single `DictionaryDocument<AIProfile>`. This eliminates SQL Server `nvarchar(max)` size limits that could break with large numbers of profiles, making the storage layer more scalable and robust.
- AIProfileIndex: A new `AIProfileIndex` `MapIndex` provides efficient querying by `Name`, `Type`, `ConnectionName`, `DeploymentId`, `OrchestratorName`, `OwnerId`, `Author`, and `IsListable` columns. This replaces the previous in-memory filtering approach.
- Automatic Data Migration: Existing tenants with profiles stored in the legacy `DictionaryDocument` format are automatically migrated to individual documents on startup. No manual intervention is required.
- DefaultAIProfileStore: Now extends `NamedSourceDocumentCatalog<AIProfile, AIProfileIndex>` (YesSql per-document pattern) instead of `NamedCatalog<AIProfile>` (single-document pattern). The store is registered for `ICatalog<AIProfile>`, `ISourceCatalog<AIProfile>`, `INamedCatalog<AIProfile>`, and `INamedSourceCatalog<AIProfile>`.
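The per-document index pattern above follows YesSql's standard `MapIndex`/`IndexProvider` shape. A hedged sketch, assuming a simplified `AIProfile` and only a subset of the columns listed in the notes:

```csharp
// Sketch of a YesSql map index for per-document profile storage.
// The column set mirrors the release notes; the profile shape and
// provider names here are illustrative assumptions.
using YesSql.Indexes;

public class AIProfileSketch
{
    public string Name { get; set; }
    public string Type { get; set; }
    public bool IsListable { get; set; }
}

public class AIProfileIndexSketch : MapIndex
{
    public string Name { get; set; }
    public string Type { get; set; }
    public bool IsListable { get; set; }
}

public class AIProfileIndexProviderSketch : IndexProvider<AIProfileSketch>
{
    public override void Describe(DescribeContext<AIProfileSketch> context)
        => context.For<AIProfileIndexSketch>()
            .Map(profile => new AIProfileIndexSketch
            {
                Name = profile.Name,
                Type = profile.Type,
                IsListable = profile.IsListable,
            });
}

// Queries can then filter in SQL instead of in memory, e.g.:
// var listable = await session
//     .Query<AIProfileSketch, AIProfileIndexSketch>(i => i.IsListable)
//     .ListAsync();
```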
Post-Session Processing
- Tool Capabilities: Post-session processing now supports AI tool invocation. A new Capabilities tab in the post-session configuration allows selecting AI tools to make available during analysis. The AI model can invoke these tools during post-session processing for actions beyond text generation.
- Per-Task Result Tracking: Each post-session task is now tracked individually with `Pending`, `Succeeded`, or `Failed` status. Tasks that succeed are preserved across retries — only tasks that haven't succeeded are retried. The `PostSessionResults` dictionary provides full per-task visibility, including error messages for failed tasks.
- Removed Token Limit: The `MaxOutputTokens = 2048` limit has been removed from post-session processing requests, allowing the AI model to use its default output limits for more thorough analysis.
AI Resolution Detection
- Independent of Session Metrics: The "Enable AI Resolution Detection" setting is now always visible and operates independently of the "Enable Session Metrics" toggle. Resolution detection enhances session closing logic beyond just metrics collection.
Conversion Metrics
- Independent of Session Metrics: The "Enable Conversion Metrics" setting and its associated goals configuration are now always visible and operate independently of the "Enable Session Metrics" toggle. Conversion goals can be defined and evaluated without enabling session metrics.
Typed AI Deployments
- First-Class Typed Deployments: `AIDeployment` is now a typed entity with a `Type` property (`Chat`, `Utility`, `Embedding`, `Image`, `SpeechToText`) and an `IsDefault` boolean. This replaces the old pattern of storing deployment names directly on provider connections.
- Pure Connections: `AIProviderConnection` no longer carries deployment name fields (`ChatDeploymentName`, `UtilityDeploymentName`, `EmbeddingDeploymentName`, `ImagesDeploymentName`). Connections are now pure connection configurations. These fields are deprecated and auto-migrated to the new `Deployments` array format.
- Default AI Deployment Settings: A new settings page under Settings → Artificial Intelligence → Default AI Deployment Settings allows configuring global defaults: `DefaultUtilityDeploymentId`, `DefaultEmbeddingDeploymentId`, `DefaultImageDeploymentId`.
- Deployment Resolution Fallback: The system resolves deployments using a fallback chain: explicit assignment → connection default for type → global default → null/error.
- AI Profile & Chat Interaction Fields: `ChatDeploymentId` and `UtilityDeploymentId` replace the single `DeploymentId` field on AI Profiles and Chat Interactions.
- Grouped Deployment UI: UI dropdowns now show deployments grouped by connection instead of the previous cascading connection → deployment selection.
- Configuration Format: `appsettings.json` now uses a `Deployments` array on each connection. The provider-level `DefaultChatDeploymentName`, `DefaultUtilityDeploymentName`, `DefaultEmbeddingDeploymentName`, and `DefaultImagesDeploymentName` settings are deprecated.
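The fallback chain (explicit assignment → connection default for type → global default → null/error) can be sketched as a simple null-coalescing resolver. The method and parameter names are assumptions for illustration, not the actual CrestApps API:

```csharp
// Sketch of the deployment resolution fallback chain described above.
using System;

public static class DeploymentResolverSketch
{
    public static string Resolve(
        string explicitDeploymentId,      // deployment set directly on the profile
        string connectionDefaultForType,  // connection's default for the deployment type
        string globalDefaultForType)      // tenant-wide default from settings
    {
        // Each step falls through to the next when the value is missing.
        // A null result means "unresolved"; callers surface an error.
        return explicitDeploymentId
            ?? connectionDefaultForType
            ?? globalDefaultForType;
    }

    public static void Main()
    {
        Console.WriteLine(Resolve("profile-deployment", "conn-default", "global-default")); // profile-deployment
        Console.WriteLine(Resolve(null, "conn-default", "global-default"));                 // conn-default
        Console.WriteLine(Resolve(null, null, "global-default"));                           // global-default
        Console.WriteLine(Resolve(null, null, null) ?? "(unresolved)");                     // (unresolved)
    }
}
```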
Agentic Framework
- Agent Profile Type: A new `Agent` value has been added to `AIProfileType`. Agent profiles are reusable, composable agents that are automatically exposed as AI tools via the tool registry.
- Agent-as-Tool: Each Agent profile is dynamically registered as an AI tool by `AgentToolRegistryProvider`. Other profiles can invoke agents during orchestration, and the AI model determines which agent to call based on the agent's description.
- Agent Selection UI: AI Profiles, AI Templates, and Chat Interactions now include an Agents section under the Capabilities tab, displayed before Tools. Agents are shown as checkboxes, separate from the Tools section for clarity.
- Agent Availability Modes: Agents support two availability modes via `AgentMetadata.Availability`:
  - On demand (default): Agents are only included in AI requests when matched by semantic or keyword relevance scoring. Users select on-demand agents from the Capabilities tab.
  - Always available: Agents are automatically included in every completion request. These agents do not appear in the Capabilities tab checkbox lists since they are always active. A warning is shown when selecting this mode to inform about increased token usage.
- Description Field: `AIProfile` now has a `Description` property, required for Agent profiles. The description is used as the tool description when the agent is exposed as an AI tool.
- AgentNames on Context: `AICompletionContext`, `ChatInteraction`, and `ProfileTemplateMetadata` now include `AgentNames` for selecting which agents to include.
- Built-in Agent Templates: Eight agent templates are included out of the box: Planner Agent (structured task planning), Research Agent (information gathering and synthesis), Executor Agent (step-by-step plan execution), Writer Agent (content drafting and polishing), Reviewer Agent (critical review and feedback), Data Analyst Agent (data analysis and insights), Summarizer Agent (content condensation), and Code Assistant Agent (software development assistance).
- Template Support: `ModuleAIProfileTemplateProvider` and `AppDataAIProfileTemplateProvider` now support `AgentNames`, `ProfileDescription`, and `AgentAvailability` in template front matter metadata.
- AgentProxyTool: The agent proxy tool parameter has been renamed from `task` to `prompt` to align with standard agentic framework terminology.
- Agent Context Injection: The new `AgentOrchestrationContextBuilderHandler` enriches the system message with descriptions of available agents. This follows the industry-standard pattern (OpenAI, LangChain, CrewAI) where capability descriptions are included in the system prompt so the model can make informed routing decisions. Agent descriptions are lightweight (~50 tokens per agent) and only included when agents are configured for the profile.
- Task Planning with Agents: The task planning prompt template now includes Agent as a source type. When agents are available, the planner lists them first (before other tools) with instructions to prefer delegating to agents for complex subtasks.
- Capabilities Tab Ordering: The Capabilities tab now orders sections as: MCP Connections → Agents → Tools (previously MCP → Tools → Agents).
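The agent context injection pattern described above — appending short agent descriptions to the system prompt so the model can route subtasks — can be sketched as follows. The record shape, method name, and prompt wording are illustrative assumptions, not the actual `AgentOrchestrationContextBuilderHandler` code:

```csharp
// Sketch of the "agent descriptions in the system prompt" pattern.
using System;
using System.Collections.Generic;
using System.Text;

public sealed record AgentSketch(string Name, string Description);

public static class AgentPromptSketch
{
    public static string AppendAgentSection(string systemPrompt, IReadOnlyList<AgentSketch> agents)
    {
        // Only enrich the prompt when agents are configured for the profile.
        if (agents.Count == 0)
        {
            return systemPrompt;
        }

        var builder = new StringBuilder(systemPrompt);
        builder.AppendLine();
        builder.AppendLine("You can delegate subtasks to these agents:");

        foreach (var agent in agents)
        {
            // Each entry stays short (~50 tokens) to limit prompt overhead.
            builder.Append("- ").Append(agent.Name).Append(": ").AppendLine(agent.Description);
        }

        return builder.ToString();
    }

    public static void Main()
    {
        var prompt = AppendAgentSection(
            "You are a helpful assistant.",
            new[]
            {
                new AgentSketch("Planner Agent", "Produces structured task plans."),
                new AgentSketch("Research Agent", "Gathers and synthesizes information."),
            });

        Console.WriteLine(prompt);
    }
}
```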