AI Services
| Feature Name | AI Services |
| Feature ID | CrestApps.OrchardCore.AI |
Provides AI services. (EnabledByDependencyOnly = true)
AI Services Feature
The AI Services feature provides the foundational infrastructure for interacting with AI models through configurable profiles and service integrations.
Once enabled, a new Artificial Intelligence menu item appears in the admin dashboard, allowing administrators to create and manage AI Profiles.
An AI Profile defines how the AI system interacts with users — including its welcome message, system message, and response behavior.
This feature does not include any AI completion client implementations such as OpenAI. It only provides the user interface and core services for managing AI profiles. You must install and configure a compatible provider module (e.g., OpenAI, Azure, AzureAIInference, or Ollama) separately.
Data Extraction (AI Profiles)
AI Profiles can be configured to extract structured data from the chat session as the conversation progresses (for example: name, email, product of interest, budget, meeting time, etc.).
To configure this, edit an AI Profile in the admin UI and open the Data Extractions tab (added by AIProfileDataExtractionDisplayDriver).
Configuration
- Go to Artificial Intelligence → AI Profiles and edit a profile.
- Open the Data Extractions tab.
- Check Enable Data Extraction.
- Configure:
- Extraction Check Interval — run extraction every N user messages (default: 1).
- Session Inactivity Timeout (minutes) — sessions inactive longer than this are automatically closed and a final extraction is run (default: 30).
- Extraction Entries — the fields to extract:
  - Name — a unique key (letters, digits, underscore), e.g. customer_name.
  - Description — what to extract (this is what the model uses as guidance).
  - Allow Multiple Values — accumulate multiple values over time (e.g., multiple mentioned products).
  - Updatable — allow replacing the previous value if the user corrects it later.
Extracted values are stored on the chat session (ExtractedData) and are updated after each qualifying message exchange. The extraction model uses the profile's Utility deployment when configured, and falls back to the chat deployment otherwise.
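For illustration, a profile with entries customer_name, email, and products_of_interest (the last with Allow Multiple Values checked) could accumulate session data shaped roughly like this. This is a hypothetical sketch of the extracted values only; the actual storage shape on the chat session is internal to the module:

```json
{
  "ExtractedData": {
    "customer_name": "Ada Lovelace",
    "email": "ada@example.com",
    "products_of_interest": [ "Standard plan", "Premium support" ]
  }
}
```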
Configuration
Before using the AI Services feature, ensure the required settings are properly configured. This can be done through the appsettings.json file or other configuration sources.
Below is an example configuration:
```json
{
  "OrchardCore": {
    "CrestApps_AI": {
      "DefaultParameters": {
        "Temperature": 0,
        "MaxOutputTokens": 800,
        "TopP": 1,
        "FrequencyPenalty": 0,
        "PresencePenalty": 0,
        "PastMessagesCount": 10,
        "MaximumIterationsPerRequest": 10,
        "EnableOpenTelemetry": false,
        "EnableDistributedCaching": true
      },
      "Providers": {
        "<!-- Provider name goes here (valid values: 'OpenAI', 'Azure', 'AzureAIInference', or 'Ollama') -->": {
          "DefaultConnectionName": "<!-- The default connection name to use from the Connections list -->",
          "Connections": {
            "<!-- Connection name goes here -->": {
              // Provider-specific settings go here (e.g., ApiKey, Endpoint)
              "Deployments": [
                { "Name": "<!-- model name -->", "Type": "Chat", "IsDefault": true },
                { "Name": "<!-- lightweight model name -->", "Type": "Utility", "IsDefault": true },
                { "Name": "<!-- embedding model name -->", "Type": "Embedding", "IsDefault": true },
                { "Name": "<!-- image model name -->", "Type": "Image", "IsDefault": true }
              ]
            }
          }
        }
      }
    }
  }
}
```
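As a concrete illustration, a minimal OpenAI setup using the typed Deployments format might look like the following. The model names are examples, and the ApiKey property shown at the connection level is an assumption for this sketch — see the OpenAI provider configuration guide for the exact connection-level settings:

```json
{
  "OrchardCore": {
    "CrestApps_AI": {
      "Providers": {
        "OpenAI": {
          "DefaultConnectionName": "openai-cloud",
          "Connections": {
            "openai-cloud": {
              "ApiKey": "<!-- OpenAI API Key -->",
              "Deployments": [
                { "Name": "gpt-4o", "Type": "Chat", "IsDefault": true },
                { "Name": "gpt-4o-mini", "Type": "Utility", "IsDefault": true },
                { "Name": "text-embedding-3-large", "Type": "Embedding", "IsDefault": true }
              ]
            }
          }
        }
      }
    }
  }
}
```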
The following configuration format using ChatDeploymentName, UtilityDeploymentName, EmbeddingDeploymentName, and ImagesDeploymentName at both the provider and connection levels is deprecated. It is still supported and will be auto-migrated at runtime, but new configurations should use the Deployments array format shown above.
```json
{
  "Providers": {
    "OpenAI": {
      "DefaultChatDeploymentName": "gpt-4o",
      "DefaultUtilityDeploymentName": "gpt-4o-mini",
      "DefaultEmbeddingDeploymentName": "text-embedding-3-large",
      "DefaultImagesDeploymentName": "dall-e-3",
      "Connections": {
        "openai-cloud": {
          "ChatDeploymentName": "gpt-4o",
          "UtilityDeploymentName": "gpt-4o-mini",
          "EmbeddingDeploymentName": "text-embedding-3-large",
          "ImagesDeploymentName": "dall-e-3"
        }
      }
    }
  }
}
```
Default Parameters
| Setting | Description | Default |
|---|---|---|
| Temperature | Controls randomness. Lower values produce more deterministic results. | 0 |
| MaxOutputTokens | Maximum number of tokens in the response. | 800 |
| TopP | Controls diversity via nucleus sampling. | 1 |
| FrequencyPenalty | Reduces repetition of token sequences. | 0 |
| PresencePenalty | Encourages the model to explore new topics. | 0 |
| PastMessagesCount | Number of previous messages included as conversation context. | 10 |
| MaximumIterationsPerRequest | Maximum number of tool-call round-trips the model can make per request. Set to a higher value (e.g., 10) to enable agentic behavior where the model can call tools, evaluate results, and call additional tools as needed. A value of 1 limits the model to a single tool call with no follow-up. | 10 |
| EnableOpenTelemetry | Enables OpenTelemetry tracing for AI requests. | false |
| EnableDistributedCaching | Enables distributed caching for AI responses. | true |
Typed AI Deployments
Each deployment is a first-class entity with a Type and an optional IsDefault flag. Deployments are defined in the Deployments array on each connection.
| Property | Description | Required |
|---|---|---|
| Name | The model/deployment name (e.g., gpt-4o, text-embedding-3-large) | Yes |
| Type | The deployment type. Valid values: Chat, Utility, Embedding, Image, SpeechToText | Yes |
| IsDefault | Whether this is the default deployment for its type within the connection | No |
Deployment Types:
| Type | Purpose | Example Models |
|---|---|---|
| Chat | Primary chat completions | gpt-4o, gemini-pro, deepseek-chat |
| Utility | Lightweight auxiliary tasks (query rewriting, planning, chart generation). Falls back to Chat when not set. | gpt-4o-mini, gemini-flash |
| Embedding | Generating embeddings for RAG / vector search | text-embedding-3-large, text-embedding-3-small |
| Image | Image generation | dall-e-3, dall-e-2 |
| SpeechToText | Speech-to-text transcription | whisper-1 |
Deployment Resolution
When an AI Profile or service requests a deployment, the system resolves it using the following fallback chain:
- Explicit deployment — The deployment explicitly assigned to the profile/resource
- Connection default for type — The deployment marked IsDefault: true for that type on the connection
- Global default — The default deployment configured in Default AI Deployment Settings (see below)
- null/error — No deployment found
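The fallback chain can be sketched as pseudocode. All type and member names below are illustrative, not the module's actual internals:

```csharp
// Illustrative pseudocode only — the module's real resolver types differ.
AIDeployment ResolveDeployment(AIProfile profile, AIConnection connection,
    DefaultDeploymentSettings globalDefaults, AIDeploymentType type)
{
    // 1. Explicit deployment assigned to the profile/resource.
    if (profile.DeploymentId is not null)
    {
        return FindDeploymentById(profile.DeploymentId);
    }

    // 2. Connection-level default for the requested type.
    var connectionDefault = connection.Deployments
        .FirstOrDefault(d => d.Type == type && d.IsDefault);

    if (connectionDefault is not null)
    {
        return connectionDefault;
    }

    // 3. Global default from the Default AI Deployment Settings page.
    if (globalDefaults.TryGetDefault(type, out var globalDefault))
    {
        return globalDefault;
    }

    // 4. No deployment found.
    return null;
}
```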
Default AI Deployment Settings
A new settings page is available under Settings → Artificial Intelligence → Default AI Deployment Settings. This page allows administrators to configure global default deployments:
| Setting | Description |
|---|---|
| DefaultUtilityDeploymentId | The global default deployment for utility tasks |
| DefaultEmbeddingDeploymentId | The global default deployment for embedding generation |
| DefaultImageDeploymentId | The global default deployment for image generation |
These global defaults act as the final fallback when no explicit or connection-level default is configured.
Chat deployments do not need a global default because they are always explicitly set on AI Profiles or Chat Interactions.
The following provider-level and connection-level settings are deprecated and will be auto-migrated:
- Provider-level: DefaultChatDeploymentName, DefaultUtilityDeploymentName, DefaultEmbeddingDeploymentName, DefaultImagesDeploymentName
- Connection-level: ChatDeploymentName, UtilityDeploymentName, EmbeddingDeploymentName, ImagesDeploymentName
Use the Deployments array on connections and the Default AI Deployment Settings page instead.
Provider Configuration
The following providers are supported out of the box:
- OpenAI — View configuration guide
- Azure — View configuration guide
- AzureAIInference — View configuration guide
- Ollama — View configuration guide
Tip: Most modern AI providers offer APIs that follow the OpenAI API standard. For these providers, use the OpenAI provider type when configuring their connections and endpoints.
Each provider can define multiple connections, and the DefaultConnectionName determines which one is used when multiple connections are available.
Microsoft.Extensions.AI
The AI module is built on top of Microsoft.Extensions.AI, making it easy to integrate AI services into your application. We provide the IAIClientFactory service, which allows you to easily create standard services such as IChatClient, IEmbeddingGenerator and IImageGenerator for any of your configured providers and connections.
Simply inject IAIClientFactory into your service and use the CreateChatClientAsync or CreateEmbeddingGeneratorAsync methods to obtain the required client.
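A minimal usage sketch, assuming a configured OpenAI provider with a connection named openai-cloud and a gpt-4o chat deployment. The exact CreateChatClientAsync parameters are an assumption here — check the factory's signature in your version of the module:

```csharp
using Microsoft.Extensions.AI;

public sealed class GreetingService
{
    private readonly IAIClientFactory _aiClientFactory;

    // IAIClientFactory is registered by the AI Services feature.
    public GreetingService(IAIClientFactory aiClientFactory)
        => _aiClientFactory = aiClientFactory;

    public async Task<string> GreetAsync(string userName)
    {
        // "OpenAI", "openai-cloud", and "gpt-4o" are placeholder names from
        // this sketch's assumed configuration — substitute your own provider,
        // connection, and deployment names.
        IChatClient chatClient = await _aiClientFactory.CreateChatClientAsync(
            providerName: "OpenAI",
            connectionName: "openai-cloud",
            deploymentName: "gpt-4o");

        var response = await chatClient.GetResponseAsync(
            $"Write a one-sentence friendly greeting for {userName}.");

        return response.Text;
    }
}
```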
AI Deployments Feature
| Feature Name | AI Deployments |
| Feature ID | CrestApps.OrchardCore.AI.Deployments |
Manages typed AI model deployments.
The AI Deployments feature extends the AI Services feature by enabling AI model deployment capabilities. Each deployment is a first-class entity with a Type property (Chat, Utility, Embedding, Image, SpeechToText) and an IsDefault flag. Deployments are associated with a provider connection and can be managed through the admin UI under Artificial Intelligence → Deployments.
UI dropdowns for deployment selection display deployments grouped by connection, making it easy to find the correct deployment without navigating a cascading connection → deployment hierarchy.
AI Chat Services Feature
| Feature Name | AI Chat Services |
| Feature ID | CrestApps.OrchardCore.AI.Chat.Core |
Provides all the necessary services to enable chatting with AI models using profiles. (EnabledByDependencyOnly = true)
The AI Chat Services feature builds upon the AI Services feature by adding AI chat capabilities. This feature is enabled on demand by other modules that provide AI completion clients.
AI Chat WebAPI
| Feature Name | AI Chat WebAPI |
| Feature ID | CrestApps.OrchardCore.AI.Chat.Api |
Provides a RESTful API for interacting with the AI chat.
The AI Chat WebAPI feature extends the AI Chat Services feature by exposing REST API endpoints that let you interact with the models.
AI Connection Management
| Feature Name | AI Connection Management |
| Feature ID | CrestApps.OrchardCore.AI.ConnectionManagement |
Provides a user interface to manage AI connections.
The AI Connection Management feature enhances AI Services by providing a user interface to manage provider connections. Connections are pure connection configurations — they define how to reach a provider (endpoint, API key, authentication). Deployments (models) are managed separately as typed entities associated with each connection.
Setting Up a Connection
1. Navigate to AI Settings
   - Go to "Artificial Intelligence" in the admin menu.
   - Click "Provider Connections" to configure a new connection.
2. Add a New Connection
   - Click "Add Connection", select a provider, and enter the required details.
   - Example configurations are in the next section.
Example Configurations for Common Providers
You need a paid plan for all of these providers, even when using models that are free on the web; otherwise you'll get errors along the lines of insufficient_quota.
- DeepSeek
  - Model name: e.g. deepseek-chat.
  - Endpoint: https://api.deepseek.com/v1/.
  - API Key: Generate one in DeepSeek Platform.
- Google Gemini
  - Model name: e.g. gemini-2.0-flash.
  - Endpoint: https://generativelanguage.googleapis.com/v1beta/openai/.
  - API Key: Generate one in Google AI Studio.
- OpenAI
  - Model name: e.g. gpt-4o-mini.
  - Endpoint: https://api.openai.com/v1/.
  - API Key: Generate one in OpenAI Platform.
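Per the tip earlier, an OpenAI-compatible service such as Google Gemini can also be configured from appsettings.json under the OpenAI provider type. A sketch follows; the Endpoint and ApiKey property names at the connection level are assumptions here — see the OpenAI provider configuration guide for the exact settings:

```json
{
  "OrchardCore": {
    "CrestApps_AI": {
      "Providers": {
        "OpenAI": {
          "DefaultConnectionName": "gemini",
          "Connections": {
            "gemini": {
              "Endpoint": "https://generativelanguage.googleapis.com/v1beta/openai/",
              "ApiKey": "<!-- Google AI Studio API Key -->",
              "Deployments": [
                { "Name": "gemini-2.0-flash", "Type": "Chat", "IsDefault": true }
              ]
            }
          }
        }
      }
    }
  }
}
```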
Creating AI Profiles
After setting up a connection, you can create AI Profiles to interact with the configured model.
Recipes
You can add or update a connection using recipes. Below is a recipe for adding or updating a connection to the DeepSeek service:
```json
{
  "steps": [
    {
      "name": "AIProviderConnections",
      "connections": [
        {
          "Source": "OpenAI",
          "Name": "deepseek",
          "IsDefault": false,
          "DisplayText": "DeepSeek",
          "Deployments": [
            { "Name": "deepseek-chat", "Type": "Chat", "IsDefault": true }
          ],
          "Properties": {
            "OpenAIConnectionMetadata": {
              "Endpoint": "https://api.deepseek.com/v1",
              "ApiKey": "<!-- DeepSeek API Key -->"
            }
          }
        }
      ]
    }
  ]
}
```
This recipe ensures that a DeepSeek connection is added or updated within the AI provider settings, with a typed Chat deployment. Replace <!-- DeepSeek API Key --> with a valid API key to authenticate the connection.
The old recipe format using ChatDeploymentName, UtilityDeploymentName, etc. on the connection object is still supported but deprecated. Migrate to the Deployments array format shown above.
If a connection with the same Name and Source already exists, the recipe updates its properties. Otherwise, it creates a new connection.
Documentation for data sources (retrieval-augmented generation (RAG) / Knowledge Base) lives in the CrestApps.OrchardCore.AI.DataSources module: README.
For managing AI tools, see AI Tools.
For consuming AI services programmatically, see Consuming AI Services.
AI Chat with Workflows
See AI Workflows for details on using AI completion tasks in Orchard Core Workflows.
Deployments with AI Chat
The AI Services feature integrates with the Deployments module, allowing profiles to be deployed to various environments through Orchard Core's Deployment UI.
Compatibility
This module is fully compatible with OrchardCore v2.1 and later. However, if you are using OrchardCore versions between v2.1 and 3.0.0-preview-18562, you must install the CrestApps.OrchardCore.Resources module into your web project. Then, enable the CrestApps.OrchardCore.Resources feature to ensure all required resource dependencies are available.