LlamaCppLLM

Bases: `LLM`

llama.cpp LLM implementation running the Python bindings for the C++ code.
Attributes:

Name | Type | Description
---|---|---
`model_path` | `RuntimeParameter[FilePath]` | contains the path to the GGUF quantized model, compatible with the installed version of the `llama.cpp` Python bindings.
`n_gpu_layers` | `RuntimeParameter[int]` | the number of layers to use for the GPU. Defaults to `-1`.
`chat_format` | `Optional[RuntimeParameter[str]]` | the chat format to use for the model. Defaults to `None`.
`n_ctx` | `int` | the context size to use for the model. Defaults to `512`.
`n_batch` | `int` | the prompt processing maximum batch size to use for the model. Defaults to `512`.
`seed` | `int` | random seed to use for the generation. Defaults to `4294967295`.
`verbose` | `RuntimeParameter[bool]` | whether to print verbose output. Defaults to `False`.
`structured_output` | `RuntimeParameter[bool]` | a dictionary containing the structured output configuration or, if more fine-grained control is needed, an instance of `OutlinesStructuredOutput`.
`extra_kwargs` | `Optional[RuntimeParameter[Dict[str, Any]]]` | additional dictionary of keyword arguments that will be passed to the `Llama` class of the `llama_cpp` library. Defaults to `{}`.
`_model` | `Optional[Llama]` | the Llama model instance. This attribute is meant to be used internally and should not be accessed directly. It will be set in the `load` method.
Runtime parameters

- `model_path`: the path to the GGUF quantized model.
- `n_gpu_layers`: the number of layers to use for the GPU. Defaults to `-1`.
- `chat_format`: the chat format to use for the model. Defaults to `None`.
- `verbose`: whether to print verbose output. Defaults to `False`.
- `extra_kwargs`: additional dictionary of keyword arguments that will be passed to the `Llama` class of the `llama_cpp` library. Defaults to `{}`.
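A minimal instantiation sketch; the GGUF file name below is hypothetical, and only the `distilabel` and `llama-cpp-python` APIs named on this page are assumed:

```python
from distilabel.llms import LlamaCppLLM

llm = LlamaCppLLM(
    model_path="./openhermes-2.5-mistral-7b.Q4_K_M.gguf",  # hypothetical local GGUF file
    n_gpu_layers=-1,  # offload all layers to the GPU when one is available
    n_ctx=1024,       # context window size for this run
    verbose=False,
)
llm.load()  # instantiates the underlying Llama model
```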
Source code in src/distilabel/llms/llamacpp.py
model_name: str (property)

Returns the model name used for the LLM.
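Once `load()` has run, the property can be read directly; a small sketch (the exact string depends on the loaded model):

```python
# Assumes `llm` from the instantiation sketch above has been loaded.
print(llm.model_name)  # name of the GGUF model in use
```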
generate(inputs, num_generations=1, max_new_tokens=128, frequency_penalty=0.0, presence_penalty=0.0, temperature=1.0, top_p=1.0, extra_generation_kwargs=None)

Generates `num_generations` responses for the given input using the Llama model.
Parameters:

Name | Type | Description | Default
---|---|---|---
`inputs` | `List[ChatType]` | a list of inputs in chat format to generate responses for. | required
`num_generations` | `int` | the number of generations to create per input. Defaults to `1`. | `1`
`max_new_tokens` | `int` | the maximum number of new tokens that the model will generate. Defaults to `128`. | `128`
`frequency_penalty` | `float` | the repetition penalty to use for the generation. Defaults to `0.0`. | `0.0`
`presence_penalty` | `float` | the presence penalty to use for the generation. Defaults to `0.0`. | `0.0`
`temperature` | `float` | the temperature to use for the generation. Defaults to `1.0`. | `1.0`
`top_p` | `float` | the top-p value to use for the generation. Defaults to `1.0`. | `1.0`
`extra_generation_kwargs` | `Optional[Dict[str, Any]]` | dictionary with additional arguments to be passed to the `create_chat_completion` method. Defaults to `None`. | `None`
Returns:

Type | Description
---|---
`List[GenerateOutput]` | A list of lists of strings containing the generated responses for each input.
Source code in src/distilabel/llms/llamacpp.py
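A hedged usage sketch for `generate()`, reusing the `llm` instance from the instantiation example above; the prompt content is illustrative only:

```python
# Assumes `llm` was created and loaded as in the earlier sketch.
outputs = llm.generate(
    inputs=[
        [{"role": "user", "content": "What is the capital of France?"}],
    ],
    num_generations=2,  # two candidate responses for the single input
    max_new_tokens=64,
    temperature=0.7,
)

# One list of generated strings per input.
for generations in outputs:
    for text in generations:
        print(text)
```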
load()

Loads the `Llama` model from the `model_path`.
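For illustration only, a simplified sketch of what loading amounts to; `load()` forwards the configured attributes (plus `extra_kwargs`) to `llama_cpp.Llama`, roughly like this rather than exactly:

```python
from llama_cpp import Llama

# Simplified stand-in for what load() wires up internally; the path is hypothetical.
model = Llama(
    model_path="./openhermes-2.5-mistral-7b.Q4_K_M.gguf",
    n_gpu_layers=-1,
    n_ctx=512,
    n_batch=512,
    seed=4294967295,
    verbose=False,
)
```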