# anyscale

## AnyscaleLLM

Bases: `OpenAILLM`

Source code in `src/distilabel/llm/anyscale.py`

### `__init__(model, task, client=None, api_key=None, max_new_tokens=128, frequency_penalty=0.0, presence_penalty=0.0, temperature=1.0, top_p=1.0, num_threads=None, prompt_format=None, prompt_formatting_fn=None)`

Initializes the AnyscaleLLM class.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `str` | the model to be used for generation. | *required* |
| `task` | `Task` | the task to be performed by the LLM. | *required* |
| `client` | `Union[OpenAI, None]` | an OpenAI client to be used for generation. If `None`, a new client will be created. | `None` |
| `api_key` | `Union[str, None]` | the Anyscale API key to be used for generation. If `None`, the `ANYSCALE_API_KEY` environment variable will be used. | `None` |
| `max_new_tokens` | `int` | the maximum number of tokens to be generated. Defaults to 128. | `128` |
| `frequency_penalty` | `float` | the frequency penalty to be used for generation. Defaults to 0.0. | `0.0` |
| `presence_penalty` | `float` | the presence penalty to be used for generation. Defaults to 0.0. | `0.0` |
| `temperature` | `float` | the temperature to be used for generation. Defaults to 1.0. | `1.0` |
| `top_p` | `float` | the top-p value to be used for generation. Defaults to 1.0. | `1.0` |
| `num_threads` | `Union[int, None]` | the number of threads to be used for parallel generation. If `None`, no parallel generation will be performed. | `None` |
| `prompt_format` | `Union[SupportedFormats, None]` | the format to be used for the prompt. If `None`, the default format of the task will be used. | `None` |
| `prompt_formatting_fn` | `Union[Callable[..., str], None]` | a function to be applied to the prompt before generation. If `None`, no additional formatting will be applied. | `None` |
Raises:

| Type | Description |
|---|---|
| `AssertionError` | if the provided `model` is not available. |
Examples:

```python
>>> from distilabel.tasks import TextGenerationTask
>>> from distilabel.llm import AnyscaleLLM
>>> llm = AnyscaleLLM(model="HuggingFaceH4/zephyr-7b-beta", task=TextGenerationTask())
>>> llm.generate([{"input": "What's the capital of Spain?"}])
```
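Since `AnyscaleLLM` extends `OpenAILLM`, the sampling arguments above map onto the fields of an OpenAI-style chat-completions request. A minimal sketch of that mapping, under the assumption that the backend follows the standard OpenAI request schema (`build_completion_payload` is an illustrative helper, not distilabel code):

```python
def build_completion_payload(
    model: str,
    messages: list,
    max_new_tokens: int = 128,
    frequency_penalty: float = 0.0,
    presence_penalty: float = 0.0,
    temperature: float = 1.0,
    top_p: float = 1.0,
) -> dict:
    """Map the __init__ sampling arguments onto OpenAI-style request fields."""
    return {
        "model": model,
        "messages": messages,
        # OpenAI-compatible endpoints name this field `max_tokens`.
        "max_tokens": max_new_tokens,
        "frequency_penalty": frequency_penalty,
        "presence_penalty": presence_penalty,
        "temperature": temperature,
        "top_p": top_p,
    }


payload = build_completion_payload(
    model="HuggingFaceH4/zephyr-7b-beta",
    messages=[{"role": "user", "content": "What's the capital of Spain?"}],
    max_new_tokens=256,
    temperature=0.7,
)
```

Note that the defaults in the helper mirror the defaults documented in the parameter table, so omitting an argument reproduces the class's out-of-the-box behaviour.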