dataset

prepare_dataset(dataset, strategy='random', sft=False, seed=None, keep_ties=False, **kwargs)

Helper function to prepare a distilabel dataset for training with the standard formats.

Currently supports the PreferenceTask, and binarizes the responses using one of two strategies (illustrated by the sketch after this list):

  • random: Selects the chosen response based on the highest rating, and for the rejected selects a random response from the remaining ones. Filters the examples in which the chosen rating is equal to the rejected one.
  • worst: Selects the chosen response based on the highest rating, and for the rejected selects the response with the lowest rating. Filters the examples in which the chosen rating is equal to the rejected one.
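
To make the selection logic concrete, here is a minimal, self-contained sketch of what the two strategies do for a single example. It is only an illustration of the idea above, not distilabel's internal implementation (which applies the binarization over the whole dataset via datasets.Dataset.map):

import random

def binarize_example(generations, ratings, strategy="random", seed=None):
    # Rank responses from highest to lowest rating.
    ranked = sorted(zip(generations, ratings), key=lambda pair: pair[1], reverse=True)
    chosen, chosen_rating = ranked[0]
    if strategy == "random":
        # "random": pick the rejected response at random from the remaining ones.
        rejected, rejected_rating = random.Random(seed).choice(ranked[1:])
    else:
        # "worst": pick the lowest-rated response as rejected.
        rejected, rejected_rating = ranked[-1]
    if chosen_rating == rejected_rating:
        return None  # a tie: filtered out unless keep_ties=True
    return {"chosen": chosen, "rejected": rejected}

print(binarize_example(["resp A", "resp B", "resp C"], [9.0, 4.0, 7.5], strategy="worst"))
# {'chosen': 'resp A', 'rejected': 'resp B'}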

Take a look at argilla/ultrafeedback-binarized-preferences for more information on binarizing a dataset to prepare it for DPO fine-tuning.

The prepared dataset follows the expected format for training with DPO, as defined in trl's dpo trainer.
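
As a rough illustration, a single record in that format has prompt, chosen and rejected fields. The values below are made up, and whether the responses end up as plain strings or chat-formatted messages depends on the task:

dpo_record = {
    "prompt": "Explain what binarization means in this context.",
    "chosen": "Binarization collapses a set of rated responses into a single preferred/rejected pair...",
    "rejected": "It means converting an image to black and white.",
}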

Note

Take a look at the Prepare datasets for fine-tuning section in the Concept guides for more information on the binarization process.

Parameters:

  • dataset (CustomDataset, required): CustomDataset with a PreferenceTask to prepare for Direct Preference Optimization.
  • strategy (BinarizationStrategies): Strategy to binarize the data. Defaults to "random".
  • sft (bool): Whether to add a messages column to the dataset, to be used for Supervised Fine Tuning. If set to True, this messages column will contain the same information as the chosen response. Defaults to False.
  • seed (int): Seed for the random generator, used with the random strategy. Defaults to None.
  • keep_ties (bool): Whether to keep ties, i.e. examples in which the binarization leaves the chosen and rejected responses with the same rating. Defaults to False.
  • kwargs (Any): Extra parameters passed to datasets.Dataset.map. Defaults to {}.
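
Since the extra keyword arguments are forwarded to datasets.Dataset.map, the usual map options can be passed straight through. A usage sketch, assuming dataset is already a CustomDataset with a PreferenceTask assigned:

dataset_binarized = prepare_dataset(
    dataset,
    strategy="random",
    seed=42,
    num_proc=4,                  # parallelize the underlying map call
    load_from_cache_file=False,  # recompute instead of reusing the datasets cache
)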

Returns:

  • CustomDataset: Dataset formatted for training with DPO.

Examples:

>>> from datasets import load_dataset
>>> from distilabel.tasks import UltraFeedbackTask
>>> import os
>>> dataset = load_dataset("argilla/DistiCoder-dpo", token=os.getenv("HF_API_TOKEN"), split="train")
>>> dataset.task = UltraFeedbackTask.for_instruction_following()
>>> dataset_binarized = prepare_dataset(dataset, strategy="worst")
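A variant that also adds the messages column for Supervised Fine Tuning (a sketch, assuming the same dataset and task as above):

>>> dataset_sft = prepare_dataset(dataset, strategy="random", sft=True, seed=42)
>>> dataset_sft[0]["messages"]  # same content as the chosen response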
Source code in src/distilabel/utils/dataset.py
def prepare_dataset(
    dataset: "CustomDataset",
    strategy: BinarizationStrategies = "random",
    sft: bool = False,
    seed: Optional[int] = None,
    keep_ties: bool = False,
    **kwargs: Any,
) -> "CustomDataset":
    """Helper function to prepare a distilabel dataset for training with the standard formats.

    Currently supports the `PreferenceTask`, and binarizes the responses assuming
    one of two strategies:

    - `random`: Selects the *chosen* response based on the highest rating, and for the
        *rejected* selects a random response from the remaining ones. Filters the examples in which
        the chosen rating is equal to the rejected one.
    - `worst`: Selects the *chosen* response based on the highest rating, and for the
        *rejected* selects the response with the lowest rating. Filters the examples in which the
        chosen rating is equal to the rejected one.

    Take a look at [argilla/ultrafeedback-binarized-preferences](https://huggingface.co/datasets/argilla/ultrafeedback-binarized-preferences)
    for more information on binarizing a dataset to prepare it for DPO fine-tuning.

    Expected format for a dataset to be trained with DPO as defined in trl's
    [dpo trainer](https://huggingface.co/docs/trl/main/en/dpo_trainer#expected-dataset-format).

    Note:
        Take a look at the
        [Prepare datasets for fine-tuning](https://distilabel.argilla.io/latest/technical-reference/pipeline/#prepare-datasets-for-fine-tuning)
        section in the Concept guides for more information on the binarization process.

    Args:
        dataset (CustomDataset):
            CustomDataset with a PreferenceTask to prepare for Direct Preference Optimization.
        strategy (BinarizationStrategies, optional):
            Strategy to binarize the data. Defaults to "random".
        sft (bool, optional):
            Whether to add a `messages` column to the dataset, to be used for Supervised Fine Tuning.
            If set to True, this messages column will contain the same information as the chosen response.
            Defaults to False.
        seed (int, optional): Seed for the random generator, used with the `random` strategy. Defaults to None.
        keep_ties (bool, optional):
            Whether to keep ties, i.e. examples in which the binarization leaves the chosen
            and rejected responses with the same rating. Defaults to False.
        kwargs: Extra parameters passed to `datasets.Dataset.map`.

    Returns:
        CustomDataset: Dataset formatted for training with DPO.

    Examples:
        >>> from datasets import load_dataset
        >>> from distilabel.tasks import UltraFeedbackTask
        >>> import os
        >>> dataset = load_dataset("argilla/DistiCoder-dpo", token=os.getenv("HF_API_TOKEN"), split="train")
        >>> dataset.task = UltraFeedbackTask.for_instruction_following()
        >>> dataset_binarized = prepare_dataset(dataset, strategy="worst")
    """
    from distilabel.tasks.preference.base import PreferenceTask

    if not isinstance(dataset.task, PreferenceTask):
        raise ValueError(
            "This functionality is currently implemented for `PreferenceTask` only."
        )

    remove_columns = [
        "input",
        "generation_model",
        "generations",
        "rating",
        "labelling_model",
        "labelling_prompt",
        "raw_labelling_response",
    ]

    # Remove the rows for which there is no rating
    def remove_incomplete_rows(example):
        if not example["rating"]:
            return False
        if len(example["generations"]) != len(example["rating"]):
            return False
        # TODO(plaguss): Maybe we should remove the examples with less than 2 generations
        # instead of checking after the filtering
        return True

    initial_length = len(dataset)
    dataset = dataset.filter(remove_incomplete_rows)
    if len(dataset) != initial_length:
        logger.info(
            f"Found {initial_length - len(dataset)} examples with no rating or different number of ratings than generations, removing them."
        )

    if len(dataset[0]["generations"]) < 2:
        raise ValueError("The dataset must contain at least 2 generations per example.")

    # If the dataset contains the rationale, grab the content
    if "rationale" in dataset.column_names:
        rationale_column = "rationale"
        remove_columns.append("rationale")
    else:
        rationale_column = None

    ds = _binarize_dataset(
        dataset,
        strategy=strategy,
        seed=seed,
        keep_ties=keep_ties,
        rating_column="rating",
        responses_column="generations",
        rationale_column=rationale_column,
        **kwargs,
    )

    if sft:
        # Adds a column to be used for Supervised Fine Tuning based on the chosen response
        ds = ds.map(
            lambda example: {
                **example,
                "messages": example["chosen"],
            }
        )
    # Imported here to avoid circular imports
    from distilabel.dataset import CustomDataset

    ds = ds.remove_columns(remove_columns)
    ds.__class__ = CustomDataset
    ds.task = dataset.task
    return ds