Class AIService

Namespace
Mythosia.AI.Services.Base
Assembly
Mythosia.AI.dll
public abstract class AIService : IAIService, IFunctionRegisterable
Inheritance
object → AIService
Implements
IAIService
IFunctionRegisterable

Constructors

AIService(string, string, HttpClient)

protected AIService(string apiKey, string baseUrl, HttpClient httpClient)

Parameters

apiKey string
baseUrl string
httpClient HttpClient

Fields

ApiKey

protected readonly string ApiKey

Field Value

string

HttpClient

protected readonly HttpClient HttpClient

Field Value

HttpClient

_chatRequests

protected List<ChatBlock> _chatRequests

Field Value

List<ChatBlock>

_structuredOutputSchemaJson

JSON schema string for structured output mode. Null when not in structured output mode. Set temporarily during GetCompletionAsync&lt;T&gt;() and cleared in a finally block.

protected string? _structuredOutputSchemaJson

Field Value

string

Properties

ActivateChat

The currently active chat block containing conversation history.

public ChatBlock ActivateChat { get; protected set; }

Property Value

ChatBlock

ChatRequests

public IReadOnlyCollection<ChatBlock> ChatRequests { get; }

Property Value

IReadOnlyCollection<ChatBlock>

ConversationPolicy

When set, automatically summarizes old messages when conversation exceeds the configured threshold. The summary is injected as a system message prefix. Set to null to disable (default).

public SummaryConversationPolicy? ConversationPolicy { get; set; }

Property Value

SummaryConversationPolicy
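
A minimal sketch of enabling and disabling automatic summarization. The `SummaryConversationPolicy` members beyond its constructor are assumptions and not confirmed by this page; only the nullability behavior described above is documented.

```csharp
// Enable automatic summarization of old messages once the conversation
// exceeds the policy's configured threshold (threshold members are
// assumptions; consult the SummaryConversationPolicy type).
service.ConversationPolicy = new SummaryConversationPolicy();

// Disable automatic summarization (the default).
service.ConversationPolicy = null;
```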

CurrentPolicy

protected FunctionCallingPolicy CurrentPolicy { get; set; }

Property Value

FunctionCallingPolicy

DefaultPolicy

public FunctionCallingPolicy DefaultPolicy { get; set; }

Property Value

FunctionCallingPolicy

EnableFunctions

public bool EnableFunctions { get; set; }

Property Value

bool

ForceFunctionName

public string ForceFunctionName { get; set; }

Property Value

string

FrequencyPenalty

public float FrequencyPenalty { get; set; }

Property Value

float

FunctionCallMode

public FunctionCallMode FunctionCallMode { get; set; }

Property Value

FunctionCallMode

Functions

public List<FunctionDefinition> Functions { get; set; }

Property Value

List<FunctionDefinition>

FunctionsDisabled

Quick toggle that disables function calling for subsequent requests (analogous to StatelessMode).

public bool FunctionsDisabled { get; set; }

Property Value

bool

MaxMessageCount

public uint MaxMessageCount { get; set; }

Property Value

uint

MaxTokens

public uint MaxTokens { get; set; }

Property Value

uint

Model

The AI model identifier currently in use.

public string Model { get; protected set; }

Property Value

string

PresencePenalty

public float PresencePenalty { get; set; }

Property Value

float

Provider

The AI provider for this service

public abstract string Provider { get; }

Property Value

string

ShouldUseFunctions

public bool ShouldUseFunctions { get; }

Property Value

bool

StatelessMode

When true, each request is processed independently without maintaining conversation history

public bool StatelessMode { get; set; }

Property Value

bool
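
A short sketch of the stateless toggle. Because history is not maintained, the second call below has no knowledge of the first; the prompts are illustrative.

```csharp
service.StatelessMode = true;   // each request is processed independently

var first  = await service.GetCompletionAsync("Translate 'hello' to French.");
var second = await service.GetCompletionAsync("What did I just ask?");
// 'second' is answered without memory of the previous request.

service.StatelessMode = false;  // restore normal conversational behavior
```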

Stream

public bool Stream { get; set; }

Property Value

bool

StructuredOutputMaxRetries

Maximum number of auto-correction retries when the LLM produces invalid JSON for structured output. Default is 2. This is NOT a network or rate-limit retry; it is an output-format correction retry that sends a correction prompt asking the model to fix its JSON output.

public int StructuredOutputMaxRetries { get; set; }

Property Value

int

SystemMessage

Convenience property for ActivateChat.SystemMessage

public string SystemMessage { get; set; }

Property Value

string

Temperature

public float Temperature { get; set; }

Property Value

float

TopP

public float TopP { get; set; }

Property Value

float

Methods

AddNewChat()

public void AddNewChat()

AddNewChat(ChatBlock)

public void AddNewChat(ChatBlock newChat)

Parameters

newChat ChatBlock

ApplyProviderSpecificRequestProfile(AIRequestProfile)

protected virtual Action ApplyProviderSpecificRequestProfile(AIRequestProfile profile)

Parameters

profile AIRequestProfile

Returns

Action

ApplyRequestContext(AIRequestContext)

protected virtual Action ApplyRequestContext(AIRequestContext context)

Parameters

context AIRequestContext

Returns

Action

ApplyRequestProfile(AIRequestProfile)

protected virtual Action ApplyRequestProfile(AIRequestProfile profile)

Parameters

profile AIRequestProfile

Returns

Action

ApplySummaryPolicyIfNeededAsync()

Checks whether the conversation should be summarized based on the current ConversationPolicy, and if so, performs the summarization using StatelessMode. Called automatically at the beginning of GetCompletionAsync(string). For streaming scenarios, call this explicitly before StreamAsync().

public Task ApplySummaryPolicyIfNeededAsync()

Returns

Task
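
As noted above, the summary policy is applied automatically only by GetCompletionAsync(string); for streaming it must be invoked explicitly. A minimal sketch:

```csharp
// Summarization is not triggered automatically for streaming,
// so apply the policy before starting the stream.
await service.ApplySummaryPolicyIfNeededAsync();

await foreach (var chunk in service.StreamAsync(prompt, cancellationToken))
{
    Console.Write(chunk);
}
```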

BeginStream(string)

Begin a streaming structured output run. Returns a StreamBuilder for fluent configuration.

Example:

var run = service.BeginStream(prompt)
    .WithStructuredOutput(new StructuredOutputPolicy { MaxRepairAttempts = 2 })
    .As<MyDto>();

await foreach (var chunk in run.Stream(ct)) Console.Write(chunk);

MyDto dto = await run.Result;

public StreamBuilder BeginStream(string prompt)

Parameters

prompt string

The user prompt to send to the LLM.

Returns

StreamBuilder

A StreamBuilder for fluent configuration.

ChangeModel(string)

public void ChangeModel(string model)

Parameters

model string

CopyFrom(AIService)

public AIService CopyFrom(AIService sourceService)

Parameters

sourceService AIService

Returns

AIService

CreateFunctionMessageRequest()

Creates HTTP request with function definitions

protected abstract HttpRequestMessage CreateFunctionMessageRequest()

Returns

HttpRequestMessage

CreateMessageRequest()

Creates the HTTP request message for the AI service

protected abstract HttpRequestMessage CreateMessageRequest()

Returns

HttpRequestMessage

EnsureUserFirstMessage(List<Message>)

Ensures the message list starts with a User message. Some APIs (Gemini, Claude) require conversations to begin with a user turn. If the first message is not from a user, a synthetic context message is prepended.

protected static void EnsureUserFirstMessage(List<Message> messages)

Parameters

messages List<Message>

ExtractFunctionCall(string)

Extracts function call from API response

protected abstract (string content, FunctionCall functionCall) ExtractFunctionCall(string response)

Parameters

response string

Returns

(string content, FunctionCall functionCall)

ExtractResponseContent(string)

Extracts the response content from the API response

protected abstract string ExtractResponseContent(string responseContent)

Parameters

responseContent string

Returns

string

GenerateImageAsync(string, string)

Generates an image from a text prompt

public abstract Task<byte[]> GenerateImageAsync(string prompt, string size = "1024x1024")

Parameters

prompt string
size string

Returns

Task<byte[]>

GenerateImageUrlAsync(string, string)

Generates an image URL from a text prompt

public abstract Task<string> GenerateImageUrlAsync(string prompt, string size = "1024x1024")

Parameters

prompt string
size string

Returns

Task<string>

GetCompletionAsync(Message)

public abstract Task<string> GetCompletionAsync(Message message)

Parameters

message Message

Returns

Task<string>

GetCompletionAsync(Message, AIRequestProfile?, AIRequestContext?)

public virtual Task<string> GetCompletionAsync(Message message, AIRequestProfile? profile = null, AIRequestContext? context = null)

Parameters

message Message
profile AIRequestProfile
context AIRequestContext

Returns

Task<string>

GetCompletionAsync(string, AIRequestProfile?, AIRequestContext?)

public virtual Task<string> GetCompletionAsync(string prompt, AIRequestProfile? profile = null, AIRequestContext? context = null)

Parameters

prompt string
profile AIRequestProfile
context AIRequestContext

Returns

Task<string>

GetCompletionAsync<T>(string)

Sends a prompt and deserializes the LLM response to the specified type. Internally generates a JSON schema from T, instructs the LLM to respond in that format, and deserializes the JSON response. If the LLM produces invalid JSON, sends an auto-correction prompt and retries up to StructuredOutputMaxRetries times.

public Task<T> GetCompletionAsync<T>(string prompt) where T : class

Parameters

prompt string

The user prompt.

Returns

Task<T>

The deserialized response object.

Type Parameters

T

The type to deserialize the response to. Must have public properties.

Exceptions

StructuredOutputException

Thrown when deserialization fails after all retry attempts.
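
A sketch of the structured-output flow described above, assuming a caller-defined DTO (`WeatherReport` is hypothetical) with public properties:

```csharp
// DTO with public properties, as required by the schema generator.
public class WeatherReport
{
    public string City { get; set; } = "";
    public double TemperatureC { get; set; }
}

try
{
    WeatherReport report = await service.GetCompletionAsync<WeatherReport>(
        "Give the current weather in Paris as structured data.");
    Console.WriteLine($"{report.City}: {report.TemperatureC} C");
}
catch (StructuredOutputException ex)
{
    // Raised after StructuredOutputMaxRetries correction attempts all fail.
    Console.Error.WriteLine(ex.Message);
}
```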

GetCompletionWithImageAsync(string, string)

public virtual Task<string> GetCompletionWithImageAsync(string prompt, string imagePath)

Parameters

prompt string
imagePath string

Returns

Task<string>

GetCompletionWithImageUrlAsync(string, string)

public virtual Task<string> GetCompletionWithImageUrlAsync(string prompt, string imageUrl)

Parameters

prompt string
imageUrl string

Returns

Task<string>

GetEffectiveMaxTokens()

Returns the effective max tokens, capped by the current model's limit. Use this instead of MaxTokens when building request bodies.

protected uint GetEffectiveMaxTokens()

Returns

uint

GetEffectiveSystemMessageWithRequestContext()

protected string GetEffectiveSystemMessageWithRequestContext()

Returns

string

GetInputTokenCountAsync()

Gets the token count for the current conversation

public abstract Task<uint> GetInputTokenCountAsync()

Returns

Task<uint>

GetInputTokenCountAsync(string)

Gets the token count for a specific prompt

public abstract Task<uint> GetInputTokenCountAsync(string prompt)

Parameters

prompt string

Returns

Task<uint>

GetLatestMessages()

Gets the latest messages from the active chat up to MaxMessageCount

protected IEnumerable<Message> GetLatestMessages()

Returns

IEnumerable<Message>

GetLatestMessagesWithFunctionFallback()

Gets messages for non-function path, converting function-related messages to plain text. Original messages in ChatBlock are never modified.

protected IEnumerable<Message> GetLatestMessagesWithFunctionFallback()

Returns

IEnumerable<Message>

GetModelMaxOutputTokens()

Returns the maximum output tokens allowed for the current model. Override in each service to provide model-specific limits.

protected virtual uint GetModelMaxOutputTokens()

Returns

uint

GetStructuredOutputInstruction()

Returns the structured output instruction to append to system messages. Returns null if not in structured output mode.

protected string? GetStructuredOutputInstruction()

Returns

string

ProcessFunctionCallAsync(string, Dictionary<string, object>)

Processes a function call by invoking the registered handler and returning its result.

protected virtual Task<string> ProcessFunctionCallAsync(string functionName, Dictionary<string, object> arguments)

Parameters

functionName string
arguments Dictionary<string, object>

Returns

Task<string>

QuickAskAsync(string, string, string)

public static Task<string> QuickAskAsync(string apiKey, string prompt, string model = "gpt-4o-mini")

Parameters

apiKey string
prompt string
model string

Returns

Task<string>
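
Because QuickAskAsync is static, it can be called without constructing a service instance; a one-off sketch:

```csharp
// One-shot prompt; the model parameter defaults to "gpt-4o-mini".
string reply = await AIService.QuickAskAsync(apiKey, "Summarize SOLID in one sentence.");
Console.WriteLine(reply);
```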

QuickAskWithImageAsync(string, string, string, string)

public static Task<string> QuickAskWithImageAsync(string apiKey, string prompt, string imagePath, string model = "gpt-4-vision-preview")

Parameters

apiKey string
prompt string
imagePath string
model string

Returns

Task<string>

RunAgentAsync(string, int)

Runs a ReAct (Reasoning + Acting) agent loop that repeatedly calls the LLM and executes function calls until the goal is achieved or maxSteps is exceeded.

Reuses existing function calling infrastructure registered via WithFunction. The loop terminates when the LLM returns a text response without any function calls, or when maxSteps is exceeded.

public virtual Task<string> RunAgentAsync(string goal, int maxSteps = 10)

Parameters

goal string

The goal or task for the agent to accomplish

maxSteps int

Maximum number of agent steps (LLM round-trips) to prevent infinite loops. Default is 10.

Returns

Task<string>

The final text response from the LLM after completing the goal

Exceptions

AgentMaxStepsExceededException

Thrown when maxSteps is exceeded without a final answer. The exception contains a PartialResponse property with the last assistant message, if any.
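
A sketch of the agent loop described above. The `WithFunction` registration shown here is an assumption about that extension's signature (this page only confirms it exists); the tool name and lambda are illustrative.

```csharp
// Hypothetical tool registration; the exact WithFunction overload is an assumption.
service.WithFunction("get_utc_time", "Returns the current UTC time",
    () => DateTime.UtcNow.ToString("O"));

try
{
    // The loop ends when the LLM answers without calling a function,
    // or throws once maxSteps round-trips are exhausted.
    string answer = await service.RunAgentAsync(
        "Tell me the current UTC time.", maxSteps: 5);
    Console.WriteLine(answer);
}
catch (AgentMaxStepsExceededException ex)
{
    Console.Error.WriteLine($"No final answer; last partial: {ex.PartialResponse}");
}
```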

SetActivateChat(string)

public void SetActivateChat(string chatBlockId)

Parameters

chatBlockId string

StreamAsync(Message, AIRequestContext?, CancellationToken)

Simple text streaming with Message input

public IAsyncEnumerable<string> StreamAsync(Message message, AIRequestContext? context = null, CancellationToken cancellationToken = default)

Parameters

message Message
context AIRequestContext
cancellationToken CancellationToken

Returns

IAsyncEnumerable<string>

StreamAsync(Message, StreamOptions, AIRequestContext?, CancellationToken)

Core streaming implementation using Template Method pattern. Manages the round loop, StatelessMode, and conversation summary policy. Providers override StreamRoundAsync(StreamOptions, bool, FunctionCallingPolicy, CancellationToken) to handle a single round. Providers that do not support function calling rounds (e.g., DeepSeek, Sonar) may override this method directly.

public virtual IAsyncEnumerable<StreamingContent> StreamAsync(Message message, StreamOptions options, AIRequestContext? context = null, CancellationToken cancellationToken = default)

Parameters

message Message
options StreamOptions
context AIRequestContext
cancellationToken CancellationToken

Returns

IAsyncEnumerable<StreamingContent>

StreamAsync(string, StreamOptions, CancellationToken)

Advanced streaming with options

public IAsyncEnumerable<StreamingContent> StreamAsync(string prompt, StreamOptions options, CancellationToken cancellationToken = default)

Parameters

prompt string
options StreamOptions
cancellationToken CancellationToken

Returns

IAsyncEnumerable<StreamingContent>

StreamAsync(string, CancellationToken)

Simple text streaming (most common use case)

public IAsyncEnumerable<string> StreamAsync(string prompt, CancellationToken cancellationToken = default)

Parameters

prompt string
cancellationToken CancellationToken

Returns

IAsyncEnumerable<string>
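
The common case sketched as a consumption loop; chunks are written as they arrive:

```csharp
await foreach (string chunk in service.StreamAsync(
    "Tell me a short story.", cancellationToken))
{
    Console.Write(chunk); // render tokens incrementally
}
```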

StreamCompletionAsync(Message, Func<string, Task>)

public abstract Task StreamCompletionAsync(Message message, Func<string, Task> messageReceivedAsync)

Parameters

message Message
messageReceivedAsync Func<string, Task>

Returns

Task

StreamCompletionAsync(string, Action<string>)

public virtual Task StreamCompletionAsync(string prompt, Action<string> messageReceived)

Parameters

prompt string
messageReceived Action<string>

Returns

Task

StreamCompletionAsync(string, Func<string, Task>)

public virtual Task StreamCompletionAsync(string prompt, Func<string, Task> messageReceivedAsync)

Parameters

prompt string
messageReceivedAsync Func<string, Task>

Returns

Task

StreamCoreAsync(Message, StreamOptions, CancellationToken)

Core streaming loop. Override this method to replace the full streaming pipeline (round loop, StatelessMode, summary policy). Most providers should override StreamRoundAsync(StreamOptions, bool, FunctionCallingPolicy, CancellationToken) instead.

protected virtual IAsyncEnumerable<StreamingContent> StreamCoreAsync(Message message, StreamOptions options, CancellationToken cancellationToken = default)

Parameters

message Message
options StreamOptions
cancellationToken CancellationToken

Returns

IAsyncEnumerable<StreamingContent>

StreamOnceAsync(Message, CancellationToken)

Streams a one-off query without affecting conversation history

public IAsyncEnumerable<string> StreamOnceAsync(Message message, CancellationToken cancellationToken = default)

Parameters

message Message
cancellationToken CancellationToken

Returns

IAsyncEnumerable<string>

StreamOnceAsync(string, CancellationToken)

Streams a one-off query without affecting conversation history

public IAsyncEnumerable<string> StreamOnceAsync(string prompt, CancellationToken cancellationToken = default)

Parameters

prompt string
cancellationToken CancellationToken

Returns

IAsyncEnumerable<string>

StreamParseJson(string)

Parses streaming JSON data

protected abstract string StreamParseJson(string jsonData)

Parameters

jsonData string

Returns

string

StreamRoundAsync(StreamOptions, bool, FunctionCallingPolicy, CancellationToken)

Executes a single streaming round: sends an HTTP request, reads the SSE stream, yields chunks, and handles function execution if detected. Yield a FunctionResult to signal the template to continue to the next round; otherwise the stream ends.

protected virtual IAsyncEnumerable<StreamingContent> StreamRoundAsync(StreamOptions options, bool useFunctions, FunctionCallingPolicy policy, CancellationToken cancellationToken)

Parameters

options StreamOptions
useFunctions bool
policy FunctionCallingPolicy
cancellationToken CancellationToken

Returns

IAsyncEnumerable<StreamingContent>