utils

prepare_dataset(dataset, strategy='random', seed=None, keep_ties=False, **kwargs)

Helper function to prepare a distilabel dataset for training with the standard formats.

Currently supports the PreferenceTask, and binarizes the responses assuming one of two strategies:

  • random: Selects the chosen response as the one with the highest rating, and picks the rejected response at random from the remaining ones. Filters out examples in which the chosen and rejected ratings are equal.
  • worst: Selects the chosen response as the one with the highest rating, and the rejected response as the one with the lowest rating. Filters out examples in which the chosen and rejected ratings are equal.
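The two strategies can be sketched on a single example as follows. This is a hypothetical standalone helper for illustration only; distilabel's internal `_binarize_dataset` operates on whole datasets via `Dataset.map`:

```python
import random
from typing import Any, Dict, List, Optional


def binarize_example(
    ratings: List[float],
    responses: List[str],
    strategy: str = "random",
    seed: Optional[int] = None,
    keep_ties: bool = False,
) -> Optional[Dict[str, Any]]:
    """Binarize one example's rated responses into a chosen/rejected pair."""
    rng = random.Random(seed)
    # chosen: the response with the highest rating
    best = max(range(len(ratings)), key=lambda i: ratings[i])
    remaining = [i for i in range(len(ratings)) if i != best]
    if strategy == "random":
        # rejected: a random response among the remaining ones
        rejected = rng.choice(remaining)
    elif strategy == "worst":
        # rejected: the response with the lowest rating
        rejected = min(remaining, key=lambda i: ratings[i])
    else:
        raise ValueError(f"Unknown strategy: {strategy}")
    if ratings[best] == ratings[rejected] and not keep_ties:
        return None  # tie: filtered out unless keep_ties=True
    return {
        "chosen": responses[best],
        "rejected": responses[rejected],
        "chosen_rating": ratings[best],
        "rejected_rating": ratings[rejected],
    }
```

With `strategy="worst"` the pair maximizes the rating gap, which tends to produce cleaner preference signal; `strategy="random"` yields more varied rejected responses.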

Take a look at argilla/ultrafeedback-binarized-preferences for more information on binarizing a dataset to prepare it for DPO fine-tuning.

The resulting dataset follows the expected format for training with DPO as defined in trl's dpo trainer.
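Concretely, TRL's DPO trainer expects each record to carry a prompt together with a chosen and a rejected completion. A minimal sketch of one record (the text values here are invented for the example):

```python
# Illustrative record in the dataset format expected by trl's DPO trainer;
# the field contents are made up for demonstration.
dpo_record = {
    "prompt": "Summarize the plot of Hamlet in one sentence.",
    "chosen": "A Danish prince seeks revenge for his father's murder ...",
    "rejected": "Hamlet is a play.",
}
```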

Note

Take a look at the Prepare datasets for fine-tuning section in the Concept guides for more information on the binarization process.

Parameters:

| Name | Type | Description | Default |
|------|------|-------------|---------|
| dataset | CustomDataset | CustomDataset with a PreferenceTask to prepare for Direct Preference Optimization. | required |
| strategy | BinarizationStrategies | Strategy to binarize the data. Defaults to "random". | 'random' |
| seed | int | Seed for the random generator, used by the random strategy. Defaults to None. | None |
| keep_ties | bool | Whether to keep examples in which the chosen and rejected responses end up with the same rating. Defaults to False. | False |
| kwargs | Any | Extra parameters passed to datasets.Dataset.map. | {} |

Returns:

| Name | Type | Description |
|------|------|-------------|
| CustomDataset | CustomDataset | Dataset formatted for training with DPO. |

Examples:

>>> from datasets import load_dataset
>>> from distilabel.tasks import UltraFeedbackTask
>>> import os
>>> dataset = load_dataset("argilla/DistiCoder-dpo", token=os.getenv("HF_API_TOKEN"), split="train")
>>> dataset.task = UltraFeedbackTask.for_instruction_following()
>>> dataset_binarized = prepare_dataset(dataset, strategy="worst")
Source code in src/distilabel/utils/dataset.py
def prepare_dataset(
    dataset: "CustomDataset",
    strategy: BinarizationStrategies = "random",
    seed: Optional[int] = None,
    keep_ties: bool = False,
    **kwargs: Any,
) -> "CustomDataset":
    """Helper function to prepare a distilabel dataset for training with the standard formats.

    Currently supports the `PreferenceTask`, and binarizes the responses assuming
    one of two strategies:

    - `random`: Selects the *chosen* response as the one with the highest rating, and
        picks the *rejected* response at random from the remaining ones. Filters out
        examples in which the chosen and rejected ratings are equal.
    - `worst`: Selects the *chosen* response as the one with the highest rating, and
        the *rejected* response as the one with the lowest rating. Filters out
        examples in which the chosen and rejected ratings are equal.

    Take a look at [argilla/ultrafeedback-binarized-preferences](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences)
    for more information on binarizing a dataset to prepare it for DPO fine-tuning.

    The resulting dataset follows the expected format for training with DPO as defined
    in trl's [dpo trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer#expected-dataset-format).

    Note:
        Take a look at the
        [Prepare datasets for fine-tuning](https://distilabel.argilla.io/latest/technical-reference/pipeline/#prepare-datasets-for-fine-tuning)
        section in the Concept guides for more information on the binarization process.

    Args:
        dataset (CustomDataset):
            CustomDataset with a PreferenceTask to prepare for Direct Preference Optimization.
        strategy (BinarizationStrategies, optional):
            Strategy to binarize the data. Defaults to "random".
        seed (int, optional): Seed for the random generator, used by the `random` strategy. Defaults to None.
        keep_ties (bool, optional):
            Whether to keep examples in which the chosen and rejected responses
            end up with the same rating. Defaults to False.
        kwargs: Extra parameters passed to `datasets.Dataset.map`.

    Returns:
        CustomDataset: Dataset formatted for training with DPO.

    Examples:
        >>> from datasets import load_dataset
        >>> from distilabel.tasks import UltraFeedbackTask
        >>> import os
        >>> dataset = load_dataset("argilla/DistiCoder-dpo", token=os.getenv("HF_API_TOKEN"), split="train")
        >>> dataset.task = UltraFeedbackTask.for_instruction_following()
        >>> dataset_binarized = prepare_dataset(dataset, strategy="worst")
    """
    from distilabel.tasks.preference.base import PreferenceTask

    if not isinstance(dataset.task, PreferenceTask):
        raise ValueError(
            "This functionality is currently implemented for `PreferenceTask` only."
        )

    remove_columns = [
        "input",
        "generation_model",
        "generations",
        "rating",
        "labelling_model",
        "labelling_prompt",
        "raw_labelling_response",
        "rationale",
    ]
    # Remove the rows for which there is no rating
    initial_length = len(dataset)
    dataset = dataset.filter(lambda example: example["rating"])
    if len(dataset) != initial_length:
        logger.info(
            f"Found {initial_length - len(dataset)} examples with no rating, removing them."
        )

    if len(dataset[0]["generations"]) < 2:
        raise ValueError("The dataset must contain at least 2 generations per example.")

    ds = _binarize_dataset(
        dataset,
        strategy=strategy,
        seed=seed,
        keep_ties=keep_ties,
        rating_column="rating",
        responses_column="generations",
        **kwargs,
    )

    # Imported here to avoid circular imports
    from distilabel.dataset import CustomDataset

    ds = ds.remove_columns(remove_columns)
    ds.__class__ = CustomDataset
    ds.task = dataset.task
    return ds