vLLM¶

Bases: LLM, CudaDevicePlacementMixin

`vllm` library LLM implementation.
Attributes:

Name | Type | Description |
---|---|---|
model | str | the model Hugging Face Hub repo id or a path to a directory containing the model weights and configuration files. |
model_kwargs | Optional[RuntimeParameter[Dict[str, Any]]] | additional dictionary of keyword arguments that will be passed to the `LLM` class of the `vllm` library. |
chat_template | Optional[str] | a chat template that will be used to build the prompts before sending them to the model. If not provided, the chat template defined in the tokenizer config will be used. If not provided and the tokenizer doesn't have a chat template, then the ChatML template will be used. Defaults to None. |
_model | Optional[LLM] | the `vLLM` model instance. |
_tokenizer | Optional[PreTrainedTokenizer] | the tokenizer instance used to format the prompt before passing it to the `LLM`. |
Runtime parameters

model_kwargs: additional dictionary of keyword arguments that will be passed to the `LLM` class of the `vllm` library.
Source code in src/distilabel/llms/vllm.py
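Below is a minimal usage sketch. It assumes the class is importable as `from distilabel.llms import vLLM` (matching the source path above), that the `vllm` package and a CUDA-capable GPU are available, and that the model id and `model_kwargs` shown are purely illustrative.

```python
# Minimal sketch: instantiating and loading the vLLM wrapper.
from distilabel.llms import vLLM

llm = vLLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",  # Hub repo id or a local path
    model_kwargs={"dtype": "bfloat16"},           # forwarded to the `LLM` class of `vllm`
)
llm.load()  # instantiates the underlying `vllm` model and its tokenizer
```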
model_name: str property ¶
Returns the model name used for the LLM.
generate(inputs, num_generations=1, max_new_tokens=128, frequency_penalty=0.0, presence_penalty=0.0, temperature=1.0, top_p=1.0, top_k=-1, extra_sampling_params=None) ¶

Generates `num_generations` responses for each input using the text generation pipeline.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
inputs | List[ChatType] | a list of inputs in chat format to generate responses for. | required |
num_generations | int | the number of generations to create per input. Defaults to `1`. | 1 |
max_new_tokens | int | the maximum number of new tokens that the model will generate. Defaults to `128`. | 128 |
frequency_penalty | float | the repetition penalty to use for the generation. Defaults to `0.0`. | 0.0 |
presence_penalty | float | the presence penalty to use for the generation. Defaults to `0.0`. | 0.0 |
temperature | float | the temperature to use for the generation. Defaults to `1.0`. | 1.0 |
top_p | float | the top-p value to use for the generation. Defaults to `1.0`. | 1.0 |
top_k | int | the top-k value to use for the generation. Defaults to `-1`. | -1 |
extra_sampling_params | Optional[Dict[str, Any]] | dictionary with additional arguments to be passed to the `SamplingParams` class from `vllm`. | None |
Returns:

Type | Description |
---|---|
List[GenerateOutput] | A list of lists of strings containing the generated responses for each input. |
Source code in src/distilabel/llms/vllm.py
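A short, hedged example of calling `generate` on a loaded instance (continuing the sketch above; the conversation content is arbitrary):

```python
# Sketch: `inputs` is a list of chats, each chat a list of OpenAI-style messages.
outputs = llm.generate(
    inputs=[
        [{"role": "user", "content": "What is the capital of France?"}],
    ],
    num_generations=2,  # two candidate responses per input
    max_new_tokens=64,
    temperature=0.7,
    top_p=0.95,
)
# One inner list per input, each containing `num_generations` strings.
print(outputs[0])
```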
load() ¶

Loads the vLLM model using either the path or the Hugging Face Hub repository id. Additionally, this method sets the `chat_template` for the tokenizer, so that the list of OpenAI-formatted inputs is parsed using the format expected by the model; if no chat template is explicitly provided and none is defined in the tokenizer config, the ChatML format is used by default.
Source code in src/distilabel/llms/vllm.py
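For illustration, here is a sketch of supplying a custom `chat_template` before `load()` is called; the repo id and Jinja template below are hypothetical placeholders, not defaults shipped with any model:

```python
# Sketch: overriding the tokenizer's chat template at construction time.
llm = vLLM(
    model="my-org/my-model",  # hypothetical Hub repo id
    chat_template=(
        "{% for message in messages %}"
        "{{ message['role'] }}: {{ message['content'] }}\n"
        "{% endfor %}"
    ),
)
llm.load()  # the custom template is set on the tokenizer during loading
```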
prepare_input(input) ¶

Prepares the input by applying the chat template to the OpenAI-formatted conversation and adding the generation prompt. A hedged sketch of what this looks like in practice, assuming a loaded instance `llm` as above:
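```python
# Sketch: turning an OpenAI-formatted conversation into the flat prompt
# string that is ultimately passed to `vllm` for generation.
conversation = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize vLLM in one sentence."},
]
prompt = llm.prepare_input(conversation)
print(prompt)  # the chat-templated prompt, ending with the generation prompt
```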