LiteLLM¶
LLM implementation that uses LiteLLM's async API client.
Attributes¶
- model: the model name to use for the LLM, e.g. "gpt-3.5-turbo" or "mistral/mistral-large". Defaults to None.
- verbose: whether to log the LiteLLM client's logs. Defaults to False.
Runtime Parameters¶
- verbose: whether to log the LiteLLM client's logs. Defaults to False.
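A minimal sketch of how this class is typically instantiated, based only on the attributes listed above. The import path (`distilabel.llms`) and the `load()` step are assumptions that may differ across library versions; check your installed package before relying on them.

```python
# Illustrative sketch only: the import path and call pattern below are
# assumptions inferred from the documented attributes, not a verified API.
from distilabel.llms import LiteLLM

llm = LiteLLM(
    model="mistral/mistral-large",  # any model string LiteLLM accepts,
                                    # optionally prefixed with the provider
    verbose=False,                  # runtime parameter: silence client logs
)
llm.load()  # prepares the underlying async LiteLLM client
```

Since `verbose` is listed under Runtime Parameters, it can also be supplied at runtime (for example, via a pipeline's runtime-parameter mechanism) instead of at construction time.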