AI Providers

AI Providers are modules that connect the CrestApps AI infrastructure to specific AI services. Each provider knows how to create chat clients, embedding generators, and image generators for its platform.

What Is a Provider?

A provider is a module that implements the connection layer between CrestApps AI Services and a specific AI platform. Providers handle:

  • Authentication — Managing API keys, tokens, or managed identity credentials
  • Client creation — Creating IChatClient, IEmbeddingGenerator, and IImageGenerator instances
  • Connection configuration — Defining endpoints, deployment names, and provider-specific settings
  • Deployment management — Supporting multiple typed models/deployments under a single connection, each with a Type (Chat, Utility, Embedding, Image, SpeechToText) and an IsDefault flag

Built-in Providers

| Provider | Module | Description |
| --- | --- | --- |
| Azure AI Inference | CrestApps.OrchardCore.AzureAIInference | GitHub models via Azure AI Inference |
| Azure OpenAI | CrestApps.OrchardCore.OpenAI.Azure | Azure OpenAI Service integration |
| Ollama | CrestApps.OrchardCore.Ollama | Local model support via Ollama |
| OpenAI | CrestApps.OrchardCore.OpenAI | OpenAI and any OpenAI-compatible provider |

Tip: Most modern AI providers offer APIs that follow the OpenAI API standard. For these providers, use the OpenAI provider type when configuring their connections and endpoints. This includes DeepSeek, Google Gemini, Together AI, vLLM, and many more.
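For instance, an OpenAI-compatible service such as DeepSeek can be configured as an ordinary OpenAI-type connection. The sketch below follows the `Deployments` schema shown later on this page; the `Endpoint` property name and the connection name `deepseek` are illustrative assumptions, and the model names come from DeepSeek's public API:

```json
{
  "Connections": {
    "deepseek": {
      "ApiKey": "...",
      "Endpoint": "https://api.deepseek.com/v1",
      "Deployments": [
        { "Name": "deepseek-chat", "Type": "Chat", "IsDefault": true }
      ]
    }
  }
}
```

Because the service speaks the OpenAI wire protocol, no custom provider module is needed; only the endpoint and model names differ.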

Implementing a Custom Provider

To create a custom AI provider, implement two key interfaces and register them with the service collection:

1. Implement IAIClientProvider

This interface is responsible for creating AI clients for your provider:

public sealed class CustomAIClientProvider : IAIClientProvider
{
    public string ProviderName => "CustomProvider";

    public ValueTask<IChatClient> CreateChatClientAsync(
        AIProviderConnection connection, string deploymentName)
    {
        // Create and return an IChatClient for your provider.
    }

    public ValueTask<IEmbeddingGenerator<string, Embedding<float>>> CreateEmbeddingGeneratorAsync(
        AIProviderConnection connection, string deploymentName)
    {
        // Create and return an embedding generator.
    }
}
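A minimal body for CreateChatClientAsync might look like the following. `YourAIClient` is the same placeholder SDK client used in the completion-client example later on this page, not a real type:

```csharp
// Illustrative only: YourAIClient stands in for your platform's SDK client.
public ValueTask<IChatClient> CreateChatClientAsync(
    AIProviderConnection connection, string deploymentName)
{
    // Build the SDK client from the connection's credentials and wrap it
    // in the Microsoft.Extensions.AI abstraction.
    var chatClient = new YourAIClient(connection.GetApiKey())
        .AsChatClient(deploymentName);

    return ValueTask.FromResult(chatClient);
}
```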

2. Implement IAICompletionClient

Use the NamedAICompletionClient base class for standard providers, or DeploymentAwareAICompletionClient if your provider supports multiple deployments:

public sealed class CustomCompletionClient : NamedAICompletionClient
{
    public CustomCompletionClient(
        IAIClientFactory aIClientFactory,
        ILoggerFactory loggerFactory,
        IDistributedCache distributedCache,
        IOptions<AIProviderOptions> providerOptions,
        IEnumerable<IAICompletionServiceHandler> handlers,
        IOptions<DefaultAIOptions> defaultOptions)
        : base(
            "CustomSource",
            aIClientFactory,
            distributedCache,
            loggerFactory,
            providerOptions.Value,
            defaultOptions.Value,
            handlers)
    {
    }

    protected override string ProviderName => "CustomProvider";

    protected override IChatClient GetChatClient(
        AIProviderConnection connection,
        AICompletionContext context,
        string deploymentName)
    {
        return new YourAIClient(connection.GetApiKey())
            .AsChatClient(deploymentName);
    }
}

3. Register Services

public sealed class Startup : StartupBase
{
    private readonly IStringLocalizer S;

    public Startup(IStringLocalizer<Startup> stringLocalizer)
    {
        S = stringLocalizer;
    }

    public override void ConfigureServices(IServiceCollection services)
    {
        services
            .AddScoped<IAIClientProvider, CustomAIClientProvider>()
            .AddAIProfile<CustomCompletionClient>("CustomSource", "CustomProvider", o =>
            {
                o.DisplayName = S["Custom Provider"];
                o.Description = S["Provides AI profiles using a custom source."];
            });
    }
}

Supporting Multiple Deployments

If your provider supports multiple models, register a deployment provider:

services.AddAIDeploymentProvider("CustomProvider", options =>
{
    options.DisplayName = _localizer["Custom Provider"];
    options.Description = _localizer["Custom provider deployments."];
});

Typed Deployments

Deployments are now first-class typed entities. Each deployment has a Type property that indicates its purpose:

| Type | Purpose |
| --- | --- |
| Chat | Primary chat completions |
| Utility | Lightweight auxiliary tasks (query rewriting, planning) |
| Embedding | Vector embeddings for RAG / semantic search |
| Image | Image generation |
| SpeechToText | Speech-to-text transcription |

When configuring connections via appsettings.json, deployments are defined as a Deployments array on each connection:

{
  "Connections": {
    "my-connection": {
      "ApiKey": "...",
      "Deployments": [
        { "Name": "gpt-4o", "Type": "Chat", "IsDefault": true },
        { "Name": "gpt-4o-mini", "Type": "Utility", "IsDefault": true },
        { "Name": "text-embedding-3-large", "Type": "Embedding", "IsDefault": true }
      ]
    }
  }
}

The IsDefault flag marks a deployment as the default for its type within that connection. The system resolves deployments using a fallback chain: explicit assignment → connection default for type → global default → null/error.
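The fallback chain can be pictured as a small resolution helper. This is an illustrative sketch, not the actual CrestApps implementation; the `AIDeployment` record and method shape are assumptions made for the example:

```csharp
// Illustrative sketch of the deployment fallback chain.
public sealed record AIDeployment(string Name, string Type, bool IsDefault);

public static class DeploymentResolver
{
    public static string? Resolve(
        string? explicitName,                            // explicit assignment, e.g. on a profile
        IEnumerable<AIDeployment> connectionDeployments, // deployments defined on the connection
        string type,                                     // "Chat", "Utility", "Embedding", ...
        string? globalDefault)                           // configured global default, if any
    {
        // 1. An explicit assignment always wins.
        if (!string.IsNullOrEmpty(explicitName))
        {
            return explicitName;
        }

        // 2. Fall back to the connection's default deployment for the requested type.
        var connectionDefault = connectionDeployments
            .FirstOrDefault(d => d.Type == type && d.IsDefault);

        if (connectionDefault is not null)
        {
            return connectionDefault.Name;
        }

        // 3. Fall back to the global default; 4. otherwise null,
        //    and the caller decides whether that is an error.
        return globalDefault;
    }
}
```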

Global defaults can be configured under Settings → Artificial Intelligence → Default AI Deployment Settings.