artkit.model.llm.ChatFromCompletionModel#

class artkit.model.llm.ChatFromCompletionModel(model, *, system_prompt=None, chat_template=None)[source]#

An adapter that turns a text completion generator into a chat system.

The chat system generates responses to user prompts by using a chat template to format the user prompt and the system prompt into a single prompt for the text generator.

The chat template must include the formatting keys system_prompt and user_prompt.

The default chat template is:

In the following conversation, a [USER] message is answered by an [AGENT].
{system_prompt}
[USER] {user_prompt} [/USER]
[AGENT]
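As a usage sketch, assuming a hypothetical MyCompletionModel class that implements CompletionModel (any concrete completion model will do):

from artkit.model.llm import ChatFromCompletionModel

# MyCompletionModel is a placeholder, not part of this API
completion_model = MyCompletionModel(model_id="my-text-generator")

chat_model = ChatFromCompletionModel(
    completion_model,
    system_prompt="You are a helpful assistant.",
)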
Bases:

ChatModel, GenAIModelAdapter[~T_CompletionModel]

Generic types:

~T_CompletionModel (bound=CompletionModel)

Metaclasses:

ABCMeta

Parameters:
  • model (T_CompletionModel) – the text completion model to wrap

  • system_prompt (Optional[str]) – the system prompt to use (optional)

  • chat_template (Optional[str]) – the chat template to use; defaults to DEFAULT_CHAT_TEMPLATE (optional)

Method summary

get_model_params

Get the parameters of the model as a mapping.

get_response

Get a response, or multiple alternative responses, from the chat system.

postprocess_response

Post-process the response.

preprocess_prompt

Preprocess the user prompt before passing it to the text generator, substituting the system and user prompts into the chat template.

to_expression

Render this object as an expression.

with_system_prompt

Set the system prompt for the LLM system.

Attribute summary

DEFAULT_CHAT_TEMPLATE

The default chat template.

chat_template

The chat template to use.

model_id

The ID of the model to use.

system_prompt

The system prompt used to set up the LLM system.

model

The LLM system to wrap.

Definitions

get_model_params()#

Get the parameters of the model as a mapping.

This includes all parameters that influence the model’s behavior, but not parameters that determine the model itself or are specific to the client, such as the model ID or the API key.

Return type:

Mapping[str, Any]

Returns:

the model parameters
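A brief sketch, printing the parameter mapping of the chat_model instance constructed in the example above:

for name, value in chat_model.get_model_params().items():
    # keys and values depend on the wrapped completion model
    print(f"{name}={value}")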

async get_response(message, *, history=None, **model_params)[source]#

Get a response, or multiple alternative responses, from the chat system.

Parameters:
  • message (str) – the user prompt to pass to the chat system

  • history (Optional[ChatHistory]) – the chat history preceding the message

  • model_params (dict[str, Any]) – additional parameters for the chat system

Return type:

list[str]

Returns:

the response or alternative responses generated by the chat system

Raises:

RequestLimitException – if an error occurs while communicating with the chat system
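A usage sketch; get_response is a coroutine and must be awaited:

import asyncio

async def main() -> None:
    # returns a list of one or more alternative responses
    responses = await chat_model.get_response(
        "What is the capital of France?"
    )
    print(responses[0])

asyncio.run(main())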

postprocess_response(response)[source]#

Post-process the response.

By default, strips leading and trailing whitespace, removes a trailing [/AGENT] tag and subsequent text if present, and returns the response as a single-item list.

Can be overridden in subclasses.

Parameters:

response (str) – the response to postprocess

Return type:

list[str]

Returns:

the post-processed response
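An illustrative sketch of the default behavior described above:

responses = chat_model.postprocess_response(
    "Paris.[/AGENT] [USER] More text"
)
# the [/AGENT] tag and everything after it are removed;
# expected value of responses: ['Paris.']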

preprocess_prompt(user_prompt)[source]#

Preprocess the user prompt before passing it to the text generator, substituting the system and user prompts into the chat template.

Parameters:

user_prompt (str) – the user prompt to preprocess

Return type:

str

Returns:

the prompt to be passed to the text generator
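With the default chat template and the system prompt set in the example above, the sketch below prints the fully formatted prompt:

prompt = chat_model.preprocess_prompt("What is the capital of France?")
print(prompt)
# In the following conversation, a [USER] message is answered by an [AGENT].
# You are a helpful assistant.
# [USER] What is the capital of France? [/USER]
# [AGENT]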

to_expression()#

Render this object as an expression.

Return type:

Expression

Returns:

the expression representing this object
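A one-line sketch, printing the expression representation of the chat_model instance from above:

print(chat_model.to_expression())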

with_system_prompt(system_prompt)[source]#

Set the system prompt for the LLM system.

Parameters:

system_prompt (str) – the system prompt to use

Return type:

ChatFromCompletionModel

Returns:

a new LLM system with the system prompt set
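A brief sketch; since a new instance is returned, the original model is left unchanged:

strict_model = chat_model.with_system_prompt("Answer in one short sentence.")
assert strict_model.system_prompt == "Answer in one short sentence."
assert strict_model is not chat_model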

DEFAULT_CHAT_TEMPLATE: str = 'In the following conversation, a [USER] message is answered by an [AGENT].\n{system_prompt}\n[USER] {user_prompt} [/USER]\n[AGENT]'#

The default chat template.
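A custom template must contain both formatting keys; a minimal sketch, reusing completion_model from the example above:

custom_chat_model = ChatFromCompletionModel(
    completion_model,
    chat_template=(
        "SYSTEM: {system_prompt}\n"
        "USER: {user_prompt}\n"
        "ASSISTANT:"
    ),
)

Note that the default postprocess_response looks for a trailing [/AGENT] tag, so a template using different tags may warrant overriding postprocess_response in a subclass.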

property chat_template: str#

The chat template to use.

model: TypeVar(T_GenAIModel_ret, bound=GenAIModel, covariant=True)#

The LLM system to wrap.

property model_id: str#

The ID of the model to use.

property system_prompt: str | None#

The system prompt used to set up the LLM system.