# Embedding Gallery

This section contains the existing `Embeddings` subclasses implemented in `distilabel`.
## embeddings

### LlamaCppEmbeddings

Bases: `Embeddings`, `CudaDevicePlacementMixin`

`LlamaCpp` library implementation for embedding generation.
Attributes:

Name | Type | Description
---|---|---
`model_name` | `str` | contains the name of the GGUF quantized model, compatible with the installed version of the `llama.cpp` Python bindings.
`model_path` | `RuntimeParameter[str]` | contains the path to the GGUF quantized model, compatible with the installed version of the `llama.cpp` Python bindings.
`repo_id` | `RuntimeParameter[str]` | the Hugging Face Hub repository id.
`verbose` | `RuntimeParameter[bool]` | whether to print verbose output. Defaults to `False`.
`n_gpu_layers` | `RuntimeParameter[int]` | number of layers to run on the GPU. Defaults to `-1`.
`disable_cuda_device_placement` | `RuntimeParameter[bool]` | whether to disable CUDA device placement. Defaults to `False`.
`normalize_embeddings` | `RuntimeParameter[bool]` | whether to normalize the embeddings. Defaults to `False`.
`seed` | `int` | RNG seed, `-1` for random.
`n_ctx` | `int` | text context, `0` = from model.
`n_batch` | `int` | prompt processing maximum batch size.
`extra_kwargs` | `Optional[RuntimeParameter[Dict[str, Any]]]` | additional dictionary of keyword arguments that will be passed to the `Llama` class of the `llama_cpp` library. Defaults to `{}`.
Runtime parameters

- `n_gpu_layers`: the number of layers to use for the GPU. Defaults to `-1`.
- `verbose`: whether to print verbose output. Defaults to `False`.
- `normalize_embeddings`: whether to normalize the embeddings. Defaults to `False`.
- `extra_kwargs`: additional dictionary of keyword arguments that will be passed to the `Llama` class of the `llama_cpp` library. Defaults to `{}`.
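
As a quick illustration of how these runtime parameters can be passed when instantiating the class, here is a minimal sketch; the specific values, including the `extra_kwargs` entry, are illustrative only:

```python
from distilabel.models.embeddings import LlamaCppEmbeddings

embeddings = LlamaCppEmbeddings(
    model="all-MiniLM-L6-v2-Q2_K.gguf",
    repo_id="second-state/All-MiniLM-L6-v2-Embedding-GGUF",
    n_gpu_layers=0,                    # run entirely on CPU
    verbose=False,
    normalize_embeddings=True,         # L2-normalize the returned vectors
    extra_kwargs={"use_mmap": False},  # forwarded to the `Llama` class of `llama_cpp`
)
embeddings.load()
```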
Examples:
Generate sentence embeddings using a local model:

```python
from pathlib import Path

from distilabel.models.embeddings import LlamaCppEmbeddings

# You can follow along with this example by downloading the model with the following
# command in the terminal, which will download it to the `Downloads` folder:
# curl -L -o ~/Downloads/all-MiniLM-L6-v2-Q2_K.gguf https://huggingface.co/second-state/All-MiniLM-L6-v2-Embedding-GGUF/resolve/main/all-MiniLM-L6-v2-Q2_K.gguf

model_path = "Downloads/"
model = "all-MiniLM-L6-v2-Q2_K.gguf"
embeddings = LlamaCppEmbeddings(
    model=model,
    model_path=str(Path.home() / model_path),
)

embeddings.load()

results = embeddings.encode(inputs=["distilabel is awesome!", "and Argilla!"])
print(results)
embeddings.unload()
```
Generate sentence embeddings using a Hugging Face Hub model:

```python
from distilabel.models.embeddings import LlamaCppEmbeddings

# To download a private model to the local machine you need to authenticate,
# e.g. by setting the `HF_TOKEN` environment variable.
repo_id = "second-state/All-MiniLM-L6-v2-Embedding-GGUF"
model = "all-MiniLM-L6-v2-Q2_K.gguf"
embeddings = LlamaCppEmbeddings(model=model, repo_id=repo_id)

embeddings.load()

results = embeddings.encode(inputs=["distilabel is awesome!", "and Argilla!"])
print(results)
embeddings.unload()
# [
#   [-0.05447685346007347, -0.01623094454407692, ...],
#   [4.4889533455716446e-05, 0.044016145169734955, ...],
# ]
```
Generate sentence embeddings on CPU:

```python
from pathlib import Path

from distilabel.models.embeddings import LlamaCppEmbeddings

# You can follow along with this example by downloading the model with the following
# command in the terminal, which will download it to the `Downloads` folder:
# curl -L -o ~/Downloads/all-MiniLM-L6-v2-Q2_K.gguf https://huggingface.co/second-state/All-MiniLM-L6-v2-Embedding-GGUF/resolve/main/all-MiniLM-L6-v2-Q2_K.gguf

model_path = "Downloads/"
model = "all-MiniLM-L6-v2-Q2_K.gguf"
embeddings = LlamaCppEmbeddings(
    model=model,
    model_path=str(Path.home() / model_path),
    n_gpu_layers=0,
    disable_cuda_device_placement=True,
)

embeddings.load()

results = embeddings.encode(inputs=["distilabel is awesome!", "and Argilla!"])
print(results)
embeddings.unload()
# [
#   [-0.05447685346007347, -0.01623094454407692, ...],
#   [4.4889533455716446e-05, 0.044016145169734955, ...],
# ]
```
Source code in src/distilabel/models/embeddings/llamacpp.py
#### model_name (property)

Returns the name of the model.
#### load()

Loads the `gguf` model using either the path or the Hugging Face Hub repository id.

Source code in `src/distilabel/models/embeddings/llamacpp.py`
#### unload()
#### encode(inputs)

Generates embeddings for the provided inputs.

Parameters:

Name | Type | Description | Default
---|---|---|---
`inputs` | `List[str]` | a list of texts for which an embedding has to be generated. | required

Returns:

Type | Description
---|---
`List[List[Union[int, float]]]` | The generated embeddings.

Source code in `src/distilabel/models/embeddings/llamacpp.py`
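
Since `encode` returns plain Python lists, the resulting vectors can be compared directly. Below is a minimal, dependency-free sketch of cosine similarity between two returned embeddings, assuming `results` comes from one of the examples above:

```python
import math
from typing import List


def cosine_similarity(a: List[float], b: List[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# `results` is the list returned by `embeddings.encode(...)` above.
# score = cosine_similarity(results[0], results[1])
```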
### SentenceTransformerEmbeddings

Bases: `Embeddings`, `CudaDevicePlacementMixin`

`sentence-transformers` library implementation for embedding generation.
Attributes:

Name | Type | Description
---|---|---
`model` | `str` | the model Hugging Face Hub repo id or a path to a directory containing the model weights and configuration files.
`device` | `Optional[RuntimeParameter[str]]` | the name of the device used to load the model e.g. "cuda", "mps", etc. Defaults to `None`.
`prompts` | `Optional[Dict[str, str]]` | a dictionary containing prompts to be used with the model. Defaults to `None`.
`default_prompt_name` | `Optional[str]` | the default prompt (in `prompts`) that will be applied to the inputs. If not provided, then no prompt will be used. Defaults to `None`.
`trust_remote_code` | `bool` | whether to allow fetching and executing remote code fetched from the repository in the Hub. Defaults to `False`.
`revision` | `Optional[str]` | if `model` refers to a Hugging Face Hub repository, then the revision (e.g. a branch name or a commit id) to use. Defaults to `"main"`.
`token` | `Optional[str]` | the Hugging Face Hub token that will be used to authenticate to the Hugging Face Hub. If not provided, the `HF_TOKEN` environment variable or the `huggingface_hub` package local configuration will be used. Defaults to `None`.
`truncate_dim` | `Optional[int]` | the dimension to truncate the sentence embeddings. Defaults to `None`.
`model_kwargs` | `Optional[Dict[str, Any]]` | extra kwargs that will be passed to the Hugging Face `transformers` model class. Defaults to `None`.
`tokenizer_kwargs` | `Optional[Dict[str, Any]]` | extra kwargs that will be passed to the Hugging Face `transformers` tokenizer class. Defaults to `None`.
`config_kwargs` | `Optional[Dict[str, Any]]` | extra kwargs that will be passed to the Hugging Face `transformers` configuration class. Defaults to `None`.
`precision` | `Optional[Literal['float32', 'int8', 'uint8', 'binary', 'ubinary']]` | the dtype that the resulting embeddings will have. Defaults to `"float32"`.
`normalize_embeddings` | `RuntimeParameter[bool]` | whether to normalize the embeddings so they have a length of 1. Defaults to `True`.
Examples:
Generating sentence embeddings:

```python
from distilabel.models import SentenceTransformerEmbeddings

embeddings = SentenceTransformerEmbeddings(model="mixedbread-ai/mxbai-embed-large-v1")

embeddings.load()

results = embeddings.encode(inputs=["distilabel is awesome!", "and Argilla!"])
# [
#   [-0.05447685346007347, -0.01623094454407692, ...],
#   [4.4889533455716446e-05, 0.044016145169734955, ...],
# ]
```
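
The `prompts` and `default_prompt_name` attributes can be combined to prepend an instruction to the inputs. Here is a sketch; the prompt texts below are illustrative assumptions, not values shipped with the model:

```python
from distilabel.models import SentenceTransformerEmbeddings

embeddings = SentenceTransformerEmbeddings(
    model="mixedbread-ai/mxbai-embed-large-v1",
    # Illustrative prompts: each entry maps a prompt name to the text prepended to the inputs.
    prompts={
        "query": "Represent this sentence for searching relevant passages: ",
        "passage": "",
    },
    default_prompt_name="query",  # prompt applied when none is selected explicitly
    normalize_embeddings=True,
)

embeddings.load()

results = embeddings.encode(inputs=["distilabel is awesome!", "and Argilla!"])
```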
Source code in src/distilabel/models/embeddings/sentence_transformers.py
#### model_name (property)

Returns the name of the model.
#### load()

Loads the Sentence Transformer model.

Source code in `src/distilabel/models/embeddings/sentence_transformers.py`
#### encode(inputs)

Generates embeddings for the provided inputs.

Parameters:

Name | Type | Description | Default
---|---|---|---
`inputs` | `List[str]` | a list of texts for which an embedding has to be generated. | required

Returns:

Type | Description
---|---
`List[List[Union[int, float]]]` | The generated embeddings.

Source code in `src/distilabel/models/embeddings/sentence_transformers.py`
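
Where smaller vectors are needed, `truncate_dim` and `precision` can be combined. The sketch below uses illustrative values and assumes the chosen model supports Matryoshka-style truncation:

```python
from distilabel.models import SentenceTransformerEmbeddings

embeddings = SentenceTransformerEmbeddings(
    model="mixedbread-ai/mxbai-embed-large-v1",
    truncate_dim=256,   # keep only the first 256 dimensions of each embedding
    precision="int8",   # quantize the resulting embeddings to int8
)

embeddings.load()

results = embeddings.encode(inputs=["distilabel is awesome!", "and Argilla!"])
```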
### vLLMEmbeddings

Bases: `Embeddings`, `CudaDevicePlacementMixin`

`vllm` library implementation for embedding generation.
Attributes:

Name | Type | Description
---|---|---
`model` | `str` | the model Hugging Face Hub repo id or a path to a directory containing the model weights and configuration files.
`dtype` | `str` | the data type to use for the model. Defaults to `auto`.
`trust_remote_code` | `bool` | whether to trust the remote code when loading the model. Defaults to `False`.
`quantization` | `Optional[str]` | the quantization mode to use for the model. Defaults to `None`.
`revision` | `Optional[str]` | the revision of the model to load. Defaults to `None`.
`enforce_eager` | `bool` | whether to enforce eager execution.
`seed` | `int` | the seed to use for the random number generator. Defaults to `0`.
`extra_kwargs` | `Optional[RuntimeParameter[Dict[str, Any]]]` | additional dictionary of keyword arguments that will be passed to the `LLM` class of the `vllm` library. Defaults to `{}`.
`_model` | `LLM` | the `vLLM` model instance.
Examples:
Generating sentence embeddings:

```python
from distilabel.models import vLLMEmbeddings

embeddings = vLLMEmbeddings(model="intfloat/e5-mistral-7b-instruct")

embeddings.load()

results = embeddings.encode(inputs=["distilabel is awesome!", "and Argilla!"])
# [
#   [-0.05447685346007347, -0.01623094454407692, ...],
#   [4.4889533455716446e-05, 0.044016145169734955, ...],
# ]
```
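
The `extra_kwargs` dictionary is forwarded to the `LLM` class of the `vllm` library, so engine options can be tuned through it. A minimal sketch with illustrative values:

```python
from distilabel.models import vLLMEmbeddings

embeddings = vLLMEmbeddings(
    model="intfloat/e5-mistral-7b-instruct",
    dtype="auto",
    # Illustrative engine options forwarded to `vllm.LLM`; adjust them for your hardware.
    extra_kwargs={"max_model_len": 4096, "gpu_memory_utilization": 0.8},
)

embeddings.load()

results = embeddings.encode(inputs=["distilabel is awesome!", "and Argilla!"])
embeddings.unload()
```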
Source code in src/distilabel/models/embeddings/vllm.py
#### model_name (property)

Returns the name of the model.

#### load()

Loads the `vLLM` model using either the path or the Hugging Face Hub repository id.

Source code in `src/distilabel/models/embeddings/vllm.py`
#### unload()
#### encode(inputs)

Generates embeddings for the provided inputs.

Parameters:

Name | Type | Description | Default
---|---|---|---
`inputs` | `List[str]` | a list of texts for which an embedding has to be generated. | required

Returns:

Type | Description
---|---
`List[List[Union[int, float]]]` | The generated embeddings.