dataset
load_task_from_disk(path)
Loads a task from disk.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
path | Path | The path to the task. | required |
Returns:

Name | Type | Description |
---|---|---|
Task | Task | The task. |
Source code in src/distilabel/utils/dataset.py
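A minimal usage sketch (not part of the documented example): it assumes load_task_from_disk is importable from distilabel.utils.dataset, as the source path above suggests, and the task path shown is hypothetical.

```python
from pathlib import Path

from distilabel.utils.dataset import load_task_from_disk

# Hypothetical path where a distilabel task was previously saved to disk.
task_path = Path("outputs/ultrafeedback-task")

task = load_task_from_disk(task_path)  # returns a Task instance
print(type(task).__name__)
```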
prepare_dataset(dataset, strategy='random', seed=None, keep_ties=False, **kwargs)
Helper function to prepare a distilabel dataset for training with the standard formats.
Currently supports the PreferenceTask, and binarizes the responses using one of two strategies (a minimal sketch of both is shown after the note below):

- random: Selects the chosen response based on the highest rating and, for the rejected one, picks a random response from the remaining ones. Filters out the examples in which the chosen rating is equal to the rejected one.
- worst: Selects the chosen response based on the highest rating and, for the rejected one, picks the response with the lowest rating. Filters out the examples in which the chosen rating is equal to the rejected one.
Take a look at argilla/ultrafeedback-binarized-preferences for more information on binarizing a dataset to prepare it for DPO fine-tuning.
The output follows the expected format for a dataset to be trained with DPO, as defined in trl's DPO trainer.
Note
Take a look at the Prepare datasets for fine-tuning section in the Concept guides for more information on the binarization process.
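A minimal, self-contained sketch of the two strategies described above. This is not the library's implementation: the example field names ("input", "generations", "rating") and ratings are made up for illustration. It shows how a set of rated responses could be binarized into the prompt/chosen/rejected records that DPO training expects:

```python
import random


def binarize(example, strategy="random", seed=None, keep_ties=False):
    """Illustrative binarization of one example with rated responses."""
    rng = random.Random(seed)
    # Rank responses by rating, best first.
    ranked = sorted(
        zip(example["generations"], example["rating"]),
        key=lambda pair: pair[1],
        reverse=True,
    )
    chosen, chosen_rating = ranked[0]
    if strategy == "random":
        # Rejected is sampled at random among the remaining responses.
        rejected, rejected_rating = rng.choice(ranked[1:])
    else:  # "worst"
        # Rejected is the lowest-rated response.
        rejected, rejected_rating = ranked[-1]
    if chosen_rating == rejected_rating and not keep_ties:
        return None  # ties are filtered out
    return {"prompt": example["input"], "chosen": chosen, "rejected": rejected}


example = {
    "input": "Explain list comprehensions in Python.",
    "generations": ["answer A", "answer B", "answer C"],
    "rating": [4.0, 2.0, 5.0],
}
print(binarize(example, strategy="worst"))
# {'prompt': 'Explain list comprehensions in Python.', 'chosen': 'answer C', 'rejected': 'answer B'}
```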
Parameters:

Name | Type | Description | Default |
---|---|---|---|
dataset | CustomDataset | CustomDataset with a PreferenceTask to prepare for Direct Preference Optimization. | required |
strategy | BinarizationStrategies | Strategy to binarize the data. Defaults to "random". | 'random' |
seed | int | Seed for the random generator, used by the "random" strategy. Defaults to None. | None |
keep_ties | bool | Whether to keep ties in case the binarization method generates chosen and rejected responses with the same rating. Defaults to False. | False |
kwargs | Any | Extra parameters passed to the binarization function. | {} |
Returns:

Name | Type | Description |
---|---|---|
CustomDataset | CustomDataset | Dataset formatted for training with DPO. |
Examples:
>>> from datasets import load_dataset
>>> from distilabel.tasks import UltraFeedbackTask
>>> import os
>>> dataset = load_dataset("argilla/DistiCoder-dpo", token=os.getenv("HF_API_TOKEN"), split="train")
>>> dataset.task = UltraFeedbackTask.for_instruction_following()
>>> dataset_binarized = prepare_dataset(dataset, strategy="worst")
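As a follow-up (a hedged sketch, not part of the documented example), the binarized dataset can be inspected to confirm it exposes the columns that DPO training expects; the column names checked here are an assumption based on trl's DPO format, not taken from the source.

>>> print(dataset_binarized.column_names)
>>> assert {"prompt", "chosen", "rejected"}.issubset(set(dataset_binarized.column_names))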
Source code in src/distilabel/utils/dataset.py