artkit.model.llm.ChatFromCompletionModel#
- class artkit.model.llm.ChatFromCompletionModel(model, *, system_prompt=None, chat_template=None)[source]#
An adapter that turns a text completion generator into a chat system.
The chat system generates responses to user prompts by using a chat template to format the user prompt and the system prompt into a single prompt for the text generator.
The chat template must include the formatting keys system_prompt and user_prompt. The default chat template is:
In the following conversation, a [USER] message is answered by an [AGENT].
{system_prompt}
[USER] {user_prompt} [/USER]
[AGENT]
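For orientation, a minimal usage sketch. The variable completion_model is a placeholder for any concrete CompletionModel implementation, not a specific artkit class:

    import asyncio

    from artkit.model.llm import ChatFromCompletionModel

    # completion_model is assumed to be any concrete CompletionModel
    # instance; the name is a placeholder in this sketch.
    chat_model = ChatFromCompletionModel(
        completion_model,
        system_prompt="You are a terse assistant.",
    )

    # get_response() is async and must be awaited or run in an event loop.
    responses = asyncio.run(chat_model.get_response(message="What is ARTKIT?"))
    print(responses[0])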
- Bases:
ChatModel, GenAIModelAdapter[~T_CompletionModel]
- Generic types:
~T_CompletionModel (bound=CompletionModel)
- Parameters:
model (T_CompletionModel) – the text generator to use as the basis for the chat system
system_prompt (Optional[str]) – the system prompt to use (optional)
chat_template (Optional[str]) – the chat template to use (optional, defaults to DEFAULT_CHAT_TEMPLATE)
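A custom chat template only needs to expose both formatting keys. A sketch with an illustrative Llama-style template (the template text is an example, not part of artkit):

    chat_model = ChatFromCompletionModel(
        completion_model,  # placeholder for any CompletionModel instance
        chat_template=(
            "<<SYS>> {system_prompt} <</SYS>>\n"
            "[INST] {user_prompt} [/INST]"
        ),
    )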
Method summary
- get_model_params() – Get the parameters of the model as a mapping.
- get_response(message, *, history=None, **model_params) – Get a response, or multiple alternative responses, from the chat system.
- postprocess_response(response) – Post-process the response.
- preprocess_prompt(user_prompt) – Preprocess the user prompt before passing it to the text generator, substituting the system and user prompts into the chat template.
- to_expression() – Render this object as an expression.
- with_system_prompt(system_prompt) – Set the system prompt for the LLM system.
Attribute summary
- DEFAULT_CHAT_TEMPLATE – The default chat template.
- chat_template – The chat template to use.
- model_id – The ID of the model to use.
- system_prompt – The system prompt used to set up the LLM system.
- model – The LLM system to wrap.
Definitions
- get_model_params()#
Get the parameters of the model as a mapping.
This includes all parameters that influence the model's behavior, but not parameters that determine the model itself or are specific to the client, such as the model ID or the API key.
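For example (the exact keys depend on the wrapped model and its configuration):

    params = dict(chat_model.get_model_params())
    # Parameters identifying the model or the client (model ID, API key)
    # are excluded from this mapping.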
- async get_response(message, *, history=None, **model_params)[source]#
Get a response, or multiple alternative responses, from the chat system.
- Parameters:
message (str) – the user prompt to send to the chat system
history – the chat history preceding the message (optional)
model_params – additional model parameters, passed through to the wrapped text generator
- Return type:
list[str]
- Returns:
the response or alternative responses generated by the chat system
- Raises:
RequestLimitException – if an error occurs while communicating with the chat system
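A usage sketch, continuing the example above. Any extra keyword arguments are forwarded to the wrapped completion model, so which keys are accepted depends on that model:

    import asyncio

    async def demo(chat_model):
        # history is optional; omit it for a single-turn exchange.
        return await chat_model.get_response(
            message="Summarize the adapter pattern in one sentence.",
        )

    responses = asyncio.run(demo(chat_model))
    # Each list element is one alternative response.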
- postprocess_response(response)[source]#
Post-process the response.
By default, strips leading and trailing whitespace, removes a trailing [/AGENT] tag and subsequent text if present, and returns the response as a single-item list. Can be overridden in subclasses.
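A rough equivalent of the documented default behavior (a sketch, not artkit's actual source):

    def postprocess_sketch(response: str) -> list[str]:
        # Drop a trailing [/AGENT] tag and anything after it, then strip
        # surrounding whitespace and wrap the result in a single-item list.
        text = response.split("[/AGENT]")[0].strip()
        return [text]

    postprocess_sketch("  Paris. [/AGENT] [USER] next turn")  # -> ['Paris.']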
- preprocess_prompt(user_prompt)[source]#
Preprocess the user prompt before passing it to the text generator, substituting the system and user prompts into the chat template.
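Conceptually this is a format call on the active chat template; a sketch (how an unset system prompt is handled here is an assumption):

    def preprocess_sketch(
        chat_template: str, system_prompt: str | None, user_prompt: str
    ) -> str:
        # Substitute both prompts into the chat template.
        return chat_template.format(
            system_prompt=system_prompt or "",
            user_prompt=user_prompt,
        )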
- to_expression()#
Render this object as an expression.
- Return type:
Expression
- Returns:
the expression representing this object
- with_system_prompt(system_prompt)[source]#
Set the system prompt for the LLM system.
- Parameters:
system_prompt (str) – the system prompt to use
- Return type:
Self
- Returns:
a new LLM system with the system prompt set
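Note that this returns a new, reconfigured instance rather than mutating the original:

    strict = chat_model.with_system_prompt("Answer only with verified facts.")
    # chat_model keeps its original system prompt; strict is a new chat
    # system carrying the updated one.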
- DEFAULT_CHAT_TEMPLATE: str = 'In the following conversation, a [USER] message is answered by an [AGENT].\n{system_prompt}\n[USER] {user_prompt} [/USER]\n[AGENT]'#
The default chat template.
- model: ~T_CompletionModel#
The LLM system to wrap.