artkit.model.llm.gemini.GeminiChat#
- class artkit.model.llm.gemini.GeminiChat(*, model_id, api_key_env=None, initial_delay=1, exponential_base=2, jitter=True, max_retries=10, system_prompt=None, safety=True, **model_params)[source]#
An asynchronous Gemini LLM.
- Bases: ChatModelConnector[GenerativeModel]
- Parameters:
  - model_id (str) – the ID of the model to use
  - api_key_env (Optional[str]) – the environment variable that holds the API key; if not specified, use the default API key environment variable for the model as returned by get_default_api_key_env()
  - initial_delay (float) – the initial delay in seconds between client requests (defaults to 1.0)
  - exponential_base (float) – the base for the exponential backoff (defaults to 2.0)
  - jitter (bool) – whether to add jitter to the delay (defaults to True)
  - max_retries (int) – the maximum number of retries for client requests (defaults to 10)
  - system_prompt (Optional[str]) – the system prompt to initialise the chat system with (optional)
  - safety (bool) – if False, disable all accessible safety categories; otherwise, enable them (default: True)
  - model_params – additional model parameters, passed with every request; parameters with None values are considered unset and will be ignored
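The retry parameters above follow the usual exponential-backoff pattern. The sketch below illustrates that formula (delay = initial_delay × exponential_base^attempt, optionally scaled by random jitter); it is an assumption for illustration, not artkit's actual implementation:

```python
import random

def backoff_delays(initial_delay=1.0, exponential_base=2.0, jitter=True, max_retries=10):
    """Yield the delay before each retry, following the documented parameters."""
    for attempt in range(max_retries):
        delay = initial_delay * exponential_base ** attempt
        if jitter:
            # spread each delay randomly between 0 and its nominal value
            delay *= random.random()
        yield delay

# without jitter, delays double each attempt
print(list(backoff_delays(jitter=False, max_retries=4)))  # → [1.0, 2.0, 4.0, 8.0]
```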
Method summary
Get the API key from the environment variable specified by api_key_env.
Get a shared client instance for this connector; return a cached instance for this model's API key if available.
Get the default name of the environment variable that holds the API key.
Get the parameters of the model as a mapping.
Get a response, or multiple alternative responses, from the chat system.
Render this object as an expression.
Set the system prompt for the LLM system.
Attribute summary
The ID of the model to use.
The system prompt used to set up the LLM system.
If False, disable all accessible safety categories; otherwise, enable them.
The environment variable that holds the API key.
The initial delay in seconds between client requests.
The base for the exponential backoff.
Whether to add jitter to the delay.
The maximum number of retries for client requests.
Additional model parameters, passed with every request.
Definitions
- get_api_key()#
Get the API key from the environment variable specified by api_key_env.
- Return type:
- Returns:
the API key
- Raises:
ValueError – if the environment variable is not set
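Conceptually, the lookup reads the environment and raises ValueError when the variable is unset, as documented above. A minimal sketch of that behavior (the variable name is hypothetical, and this is not the actual artkit code):

```python
import os

def read_api_key(api_key_env: str) -> str:
    """Return the API key held in the given environment variable."""
    key = os.environ.get(api_key_env)
    if key is None:
        # mirrors the documented ValueError when the variable is not set
        raise ValueError(f"environment variable {api_key_env!r} is not set")
    return key

os.environ["MY_GEMINI_KEY"] = "dummy-key"  # hypothetical variable name, for illustration
print(read_api_key("MY_GEMINI_KEY"))  # → dummy-key
```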
- get_client()#
Get a shared client instance for this connector; return a cached instance for this model's API key if available.
- Return type:
- Returns:
the shared client instance
- classmethod get_default_api_key_env()[source]#
Get the default name of the environment variable that holds the API key.
- Return type:
- Returns:
the default name of the API key environment variable
- get_model_params()[source]#
Get the parameters of the model as a mapping.
This includes all parameters that influence the model's behavior, but not parameters that determine the model itself or are specific to the client, such as the model ID or the API key.
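The constructor's convention that parameters with None values are considered unset can be sketched as a simple dictionary filter. This is an illustration of the documented behavior, not artkit's internal code, and the parameter names are hypothetical examples:

```python
def effective_model_params(**model_params):
    """Drop parameters whose value is None, treating them as unset."""
    return {name: value for name, value in model_params.items() if value is not None}

# top_p is None, so it is ignored; the other parameters survive
print(effective_model_params(temperature=0.7, top_p=None, max_output_tokens=256))
# → {'temperature': 0.7, 'max_output_tokens': 256}
```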
- async get_response(message, *, history=None, **model_params)[source]#
Get a response, or multiple alternative responses, from the chat system.
- Parameters:
- Return type:
- Returns:
the response or alternative responses generated by the chat system
- Raises:
RequestLimitException – if an error occurs while communicating with the chat system
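Because get_response is a coroutine, callers await it inside an event loop. The sketch below uses a hypothetical stub with the same signature to show the call pattern without network access; with a real GeminiChat instance the awaiting code would look the same:

```python
import asyncio

class StubChat:
    """Stand-in mimicking the get_response signature, for illustration only."""

    async def get_response(self, message, *, history=None, **model_params):
        # a real chat model would return one or more generated completions
        return [f"echo: {message}"]

async def main():
    chat = StubChat()  # with artkit, this would be a configured GeminiChat
    responses = await chat.get_response("Hello, Gemini!")
    print(responses)  # → ['echo: Hello, Gemini!']

asyncio.run(main())
```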
- to_expression()#
Render this object as an expression.
- Return type:
Expression
- Returns:
the expression representing this object