# Transformers

## TransformersLLM

Bases: `LLM`, `CudaDevicePlacementMixin`

Hugging Face `transformers` library LLM implementation using the text generation pipeline.
Attributes:

Name | Type | Description |
---|---|---|
`model` | `str` | the model Hugging Face Hub repo id or a path to a directory containing the model weights and configuration files. |
`revision` | `str` | if `model` refers to a Hugging Face Hub repository, then the revision (e.g. a branch name or a commit id) to use. Defaults to `"main"`. |
`torch_dtype` | `str` | the torch dtype to use for the model, e.g. "float16", "float32", etc. Defaults to `"auto"`. |
`trust_remote_code` | `bool` | whether or not to trust remote code (in the Hugging Face Hub repository) to load the model. Defaults to `False`. |
`model_kwargs` | `Optional[Dict[str, Any]]` | additional dictionary of keyword arguments that will be passed to the `from_pretrained` method of the model. |
`tokenizer` | `Optional[str]` | the tokenizer Hugging Face Hub repo id or a path to a directory containing the tokenizer config files. If not provided, the one associated to the `model` will be used. Defaults to `None`. |
`use_fast` | `bool` | whether to use a fast tokenizer or not. Defaults to `True`. |
`chat_template` | `Optional[str]` | a chat template that will be used to build the prompts before sending them to the model. If not provided, the chat template defined in the tokenizer config will be used. If not provided and the tokenizer doesn't have a chat template, then the ChatML template will be used. Defaults to `None`. |
`device` | `Optional[Union[str, int]]` | the name or index of the device where the model will be loaded. Defaults to `None`. |
`device_map` | `Optional[Union[str, Dict[str, Any]]]` | a dictionary mapping each layer of the model to a device, or a mode like `"sequential"` or `"auto"`. Defaults to `None`. |
`token` | `Optional[str]` | the Hugging Face Hub token that will be used to authenticate to the Hugging Face Hub. If not provided, the `HF_TOKEN` environment variable or the `huggingface_hub` package local configuration will be used. Defaults to `None`. |
Source code in `src/distilabel/llms/huggingface/transformers.py`
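For illustration, a minimal sketch of instantiating and loading the LLM. The model id and device below are placeholders, not values prescribed by the class:

```python
from distilabel.llms import TransformersLLM

llm = TransformersLLM(
    model="microsoft/Phi-3-mini-4k-instruct",  # hypothetical Hub repo id
    torch_dtype="float16",
    device="cuda:0",  # or an integer index, or None for CPU
)
llm.load()  # loads model + tokenizer and builds the text generation pipeline
```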
### `model_name: str` (property)

Returns the model name used for the LLM.
### `generate(inputs, num_generations=1, max_new_tokens=128, temperature=0.1, repetition_penalty=1.1, top_p=1.0, top_k=0, do_sample=True)`

Generates `num_generations` responses for each input using the text generation pipeline.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`inputs` | `List[ChatType]` | a list of inputs in chat format to generate responses for. | *required* |
`num_generations` | `int` | the number of generations to create per input. Defaults to `1`. | `1` |
`max_new_tokens` | `int` | the maximum number of new tokens that the model will generate. Defaults to `128`. | `128` |
`temperature` | `float` | the temperature to use for the generation. Defaults to `0.1`. | `0.1` |
`repetition_penalty` | `float` | the repetition penalty to use for the generation. Defaults to `1.1`. | `1.1` |
`top_p` | `float` | the top-p value to use for the generation. Defaults to `1.0`. | `1.0` |
`top_k` | `int` | the top-k value to use for the generation. Defaults to `0`. | `0` |
`do_sample` | `bool` | whether to use sampling or not. Defaults to `True`. | `True` |
Returns:

Type | Description |
---|---|
`List[GenerateOutput]` | A list of lists of strings containing the generated responses for each input. |
Source code in `src/distilabel/llms/huggingface/transformers.py`
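As a hedged sketch (reusing the hypothetical `llm` instance loaded above), a call with a single chat-formatted input might look like:

```python
# Each input is a conversation: a list of {"role": ..., "content": ...} messages.
outputs = llm.generate(
    inputs=[[{"role": "user", "content": "What is the capital of France?"}]],
    num_generations=2,
    max_new_tokens=64,
)
# outputs[0] holds the two generated strings for the first (and only) input.
print(outputs[0])
```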
### `get_last_hidden_states(inputs)`

Gets the last `hidden_states` of the model for the given inputs. It doesn't execute the task head.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`inputs` | `List[ChatType]` | a list of inputs in chat format to generate the embeddings for. | *required* |
Returns:

Type | Description |
---|---|
`List[HiddenState]` | A list containing the last hidden state for each sequence, as a NumPy array with shape `[num_tokens, hidden_size]`. |
Source code in `src/distilabel/llms/huggingface/transformers.py`
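For illustration, a minimal sketch of retrieving the hidden states, again assuming the hypothetical `llm` instance from above:

```python
hidden_states = llm.get_last_hidden_states(
    inputs=[[{"role": "user", "content": "Hello, world!"}]]
)
# One NumPy array per input, with shape [num_tokens, hidden_size].
print(hidden_states[0].shape)
```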
### `load()`

Loads the model and tokenizer and creates the text generation pipeline. In addition, it will configure the tokenizer chat template.

Source code in `src/distilabel/llms/huggingface/transformers.py`
### `prepare_input(input)`

Prepares the input by applying the chat template to the input, which is formatted as an OpenAI conversation, and adding the generation prompt.
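As a sketch of what this step produces (the exact string depends on the tokenizer's chat template, so the ChatML rendering in the comment is only indicative):

```python
prompt = llm.prepare_input([{"role": "user", "content": "What is 2 + 2?"}])
# `prompt` is the flattened string handed to the pipeline, e.g. with ChatML:
# "<|im_start|>user\nWhat is 2 + 2?<|im_end|>\n<|im_start|>assistant\n"
print(prompt)
```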