Task Gallery

This section contains the existing Task subclasses implemented in distilabel.

BitextRetrievalGenerator

Bases: _EmbeddingDataGenerator

Generate bitext retrieval data with an LLM to later train an embedding model.

BitextRetrievalGenerator is a GeneratorTask that generates bitext retrieval data with an LLM, which can later be used to train an embedding model. The task is based on the paper "Improving Text Embeddings with Large Language Models", and the data is generated from the provided attributes, or randomly sampled if not provided.

Attributes:

Name Type Description
source_language str

The source language of the data to be generated, which can be any of the languages retrieved from the list of XLM-R in the Appendix A of https://aclanthology.org/2020.acl-main.747.pdf.

target_language str

The target language of the data to be generated, which can be any of the languages retrieved from the list of XLM-R in the Appendix A of https://aclanthology.org/2020.acl-main.747.pdf.

unit Optional[Literal['sentence', 'phrase', 'passage']]

The unit of the data to be generated, which can be sentence, phrase, or passage. Defaults to None, meaning that it will be randomly sampled.

difficulty Optional[Literal['elementary school', 'high school', 'college']]

The difficulty of the query to be generated, which can be elementary school, high school, or college. Defaults to None, meaning that it will be randomly sampled.

high_score Optional[Literal['4', '4.5', '5']]

The high score of the query to be generated, which can be 4, 4.5, or 5. Defaults to None, meaning that it will be randomly sampled.

low_score Optional[Literal['2.5', '3', '3.5']]

The low score of the query to be generated, which can be 2.5, 3, or 3.5. Defaults to None, meaning that it will be randomly sampled.

seed int

The random seed to be set in case there's any sampling within the format_input method.

Examples:

Generate bitext retrieval data for training embedding models:

```python
from distilabel.pipeline import Pipeline
from distilabel.steps.tasks import BitextRetrievalGenerator

with Pipeline("my-pipeline") as pipeline:
    task = BitextRetrievalGenerator(
        source_language="English",
        target_language="Spanish",
        unit="sentence",
        difficulty="elementary school",
        high_score="4",
        low_score="2.5",
        llm=...,
    )

    ...

    task >> ...
```
Source code in src/distilabel/steps/tasks/improving_text_embeddings.py
class BitextRetrievalGenerator(_EmbeddingDataGenerator):
    """Generate bitext retrieval data with an `LLM` to later on train an embedding model.

    `BitextRetrievalGenerator` is a `GeneratorTask` that generates bitext retrieval data with an
    `LLM` to later on train an embedding model. The task is based on the paper "Improving
    Text Embeddings with Large Language Models" and the data is generated based on the
    provided attributes, or randomly sampled if not provided.

    Attributes:
        source_language: The source language of the data to be generated, which can be any of the languages
            retrieved from the list of XLM-R in the Appendix A of https://aclanthology.org/2020.acl-main.747.pdf.
        target_language: The target language of the data to be generated, which can be any of the languages
            retrieved from the list of XLM-R in the Appendix A of https://aclanthology.org/2020.acl-main.747.pdf.
        unit: The unit of the data to be generated, which can be `sentence`, `phrase`, or `passage`.
            Defaults to `None`, meaning that it will be randomly sampled.
        difficulty: The difficulty of the query to be generated, which can be `elementary school`, `high school`, or `college`.
            Defaults to `None`, meaning that it will be randomly sampled.
        high_score: The high score of the query to be generated, which can be `4`, `4.5`, or `5`.
            Defaults to `None`, meaning that it will be randomly sampled.
        low_score: The low score of the query to be generated, which can be `2.5`, `3`, or `3.5`.
            Defaults to `None`, meaning that it will be randomly sampled.
        seed: The random seed to be set in case there's any sampling within the `format_input` method.

    Examples:

        Generate bitext retrieval data for training embedding models:

        ```python
        from distilabel.pipeline import Pipeline
        from distilabel.steps.tasks import BitextRetrievalGenerator

        with Pipeline("my-pipeline") as pipeline:
            task = BitextRetrievalGenerator(
                source_language="English",
                target_language="Spanish",
                unit="sentence",
                difficulty="elementary school",
                high_score="4",
                low_score="2.5",
                llm=...,
            )

            ...

            task >> ...
        ```
    """

    source_language: str = Field(
        default="English",
        description="The languages are retrieved from the list of XLM-R in the Appendix A of https://aclanthology.org/2020.acl-main.747.pdf",
    )
    target_language: str = Field(
        default=...,
        description="The languages are retrieved from the list of XLM-R in the Appendix A of https://aclanthology.org/2020.acl-main.747.pdf",
    )

    unit: Optional[Literal["sentence", "phrase", "passage"]] = None
    difficulty: Optional[Literal["elementary school", "high school", "college"]] = None
    high_score: Optional[Literal["4", "4.5", "5"]] = None
    low_score: Optional[Literal["2.5", "3", "3.5"]] = None

    _template_name: str = PrivateAttr(default="bitext-retrieval")

    @property
    def prompt(self) -> ChatType:
        """Contains the `prompt` to be used in the `process` method, rendering the `_template`; and
        formatted as an OpenAI formatted chat i.e. a `ChatType`, assuming that there's only one turn,
        being from the user with the content being the rendered `_template`.
        """
        return [
            {
                "role": "user",
                "content": self._template.render(  # type: ignore
                    source_language=self.source_language,
                    target_language=self.target_language,
                    unit=self.unit or random.choice(["sentence", "phrase", "passage"]),
                    difficulty=self.difficulty
                    or random.choice(["elementary school", "high school", "college"]),
                    high_score=self.high_score or random.choice(["4", "4.5", "5"]),
                    low_score=self.low_score or random.choice(["2.5", "3", "3.5"]),
                ).strip(),
            }
        ]  # type: ignore

    @property
    def keys(self) -> List[str]:
        """Contains the `keys` that will be parsed from the `LLM` output into a Python dict."""
        return ["S1", "S2", "S3"]

keys: List[str] property

Contains the keys that will be parsed from the LLM output into a Python dict.

prompt: ChatType property

Contains the prompt to be used in the process method, built by rendering the _template and formatted as an OpenAI-style chat, i.e. a ChatType, assuming a single turn from the user whose content is the rendered _template.
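
To make the relationship between the prompt and keys properties concrete, here is a minimal sketch (not distilabel code) of the row shape this generator is expected to yield. The values are hypothetical, and the mapping of S1/S2/S3 to source text, positive translation, and harder translation follows the bitext retrieval prompt of the referenced paper, so treat it as an assumption:

```python
# Hypothetical output row, assuming the LLM answers with a JSON object containing
# the S1/S2/S3 keys parsed by `keys` above.
example_row = {
    "S1": "The cat is sleeping on the sofa.",    # text in `source_language`
    "S2": "El gato está durmiendo en el sofá.",  # high-scoring translation in `target_language`
    "S3": "El gato está jugando en el jardín.",  # lower-scoring, harder translation
    "model_name": "mistralai/Mistral-7B-Instruct-v0.2",
}
```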

ChatGeneration

Bases: Task

Generates text based on a conversation.

ChatGeneration is a pre-defined task that defines the messages as the input and generation as the output. This task is used to generate text based on a conversation. The model_name is also returned as part of the output in order to enhance it.

Input columns
  • messages (List[Dict[Literal["role", "content"], str]]): The messages to generate the follow up completion from.
Output columns
  • generation (str): The generated text from the assistant.
  • model_name (str): The model name used to generate the text.
Categories
  • chat-generation
Icon

:material-chat:

Examples:

Generate text from a conversation in OpenAI chat format:

```python
from distilabel.steps.tasks import ChatGeneration
from distilabel.llms.huggingface import InferenceEndpointsLLM

# Consider this as a placeholder for your actual LLM.
chat = ChatGeneration(
    llm=InferenceEndpointsLLM(
        model_id="mistralai/Mistral-7B-Instruct-v0.2",
    )
)

chat.load()

result = next(
    chat.process(
        [
            {
                "messages": [
                    {"role": "user", "content": "How much is 2+2?"},
                ]
            }
        ]
    )
)
# result
# [
#     {
#         'messages': [{'role': 'user', 'content': 'How much is 2+2?'}],
#         'model_name': 'mistralai/Mistral-7B-Instruct-v0.2',
#         'generation': '4',
#     }
# ]
```
Source code in src/distilabel/steps/tasks/text_generation.py
class ChatGeneration(Task):
    """Generates text based on a conversation.

    `ChatGeneration` is a pre-defined task that defines the `messages` as the input
    and `generation` as the output. This task is used to generate text based on a conversation.
    The `model_name` is also returned as part of the output in order to enhance it.

    Input columns:
        - messages (`List[Dict[Literal["role", "content"], str]]`): The messages to generate the
            follow up completion from.

    Output columns:
        - generation (`str`): The generated text from the assistant.
        - model_name (`str`): The model name used to generate the text.

    Categories:
        - chat-generation

    Icon:
        `:material-chat:`

    Examples:

        Generate text from a conversation in OpenAI chat format:

        ```python
        from distilabel.steps.tasks import ChatGeneration
        from distilabel.llms.huggingface import InferenceEndpointsLLM

        # Consider this as a placeholder for your actual LLM.
        chat = ChatGeneration(
            llm=InferenceEndpointsLLM(
                model_id="mistralai/Mistral-7B-Instruct-v0.2",
            )
        )

        chat.load()

        result = next(
            chat.process(
                [
                    {
                        "messages": [
                            {"role": "user", "content": "How much is 2+2?"},
                        ]
                    }
                ]
            )
        )
        # result
        # [
        #     {
        #         'messages': [{'role': 'user', 'content': 'How much is 2+2?'}],
        #         'model_name': 'mistralai/Mistral-7B-Instruct-v0.2',
        #         'generation': '4',
        #     }
        # ]
        ```
    """

    @property
    def inputs(self) -> List[str]:
        """The input for the task are the `messages`."""
        return ["messages"]

    def format_input(self, input: Dict[str, Any]) -> ChatType:
        """The input is formatted as a `ChatType` assuming that the messages provided
        are already formatted that way i.e. following the OpenAI chat format."""

        if not is_openai_format(input["messages"]):
            raise ValueError(
                "Input `messages` must be an OpenAI chat-like format conversation. "
                f"Got: {input['messages']}. Please check: 'https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models'."
            )

        if input["messages"][-1]["role"] != "user":
            raise ValueError(
                "The last message must be from the user. Please check: "
                "'https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models'."
            )

        return input["messages"]

    @property
    def outputs(self) -> List[str]:
        """The output for the task is the `generation` and the `model_name`."""
        return ["generation", "model_name"]

    def format_output(
        self, output: Union[str, None], input: Dict[str, Any]
    ) -> Dict[str, Any]:
        """The output is formatted as a dictionary with the `generation`. The `model_name`
        will be automatically included within the `process` method of `Task`."""
        return {"generation": output}

inputs: List[str] property

The input for the task is the messages.

outputs: List[str] property

The output for the task is the generation and the model_name.

format_input(input)

The input is formatted as a ChatType assuming that the messages provided are already formatted that way i.e. following the OpenAI chat format.

Source code in src/distilabel/steps/tasks/text_generation.py
def format_input(self, input: Dict[str, Any]) -> ChatType:
    """The input is formatted as a `ChatType` assuming that the messages provided
    are already formatted that way i.e. following the OpenAI chat format."""

    if not is_openai_format(input["messages"]):
        raise ValueError(
            "Input `messages` must be an OpenAI chat-like format conversation. "
            f"Got: {input['messages']}. Please check: 'https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models'."
        )

    if input["messages"][-1]["role"] != "user":
        raise ValueError(
            "The last message must be from the user. Please check: "
            "'https://cookbook.openai.com/examples/how_to_format_inputs_to_chatgpt_models'."
        )

    return input["messages"]
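
For reference, a brief usage sketch of the validation above, reusing the chat task from the earlier example (the inputs are hypothetical):

```python
# Valid conversation: the last message comes from the user, so the messages are
# returned unchanged as the chat that will be sent to the LLM.
chat.format_input({"messages": [{"role": "user", "content": "How much is 2+2?"}]})
# [{'role': 'user', 'content': 'How much is 2+2?'}]

# Invalid conversation: the last message comes from the assistant, so a ValueError
# is raised before the LLM is ever called.
chat.format_input({"messages": [{"role": "assistant", "content": "4"}]})
# ValueError: The last message must be from the user. ...
```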

format_output(output, input)

The output is formatted as a dictionary with the generation. The model_name will be automatically included within the process method of Task.

Source code in src/distilabel/steps/tasks/text_generation.py
def format_output(
    self, output: Union[str, None], input: Dict[str, Any]
) -> Dict[str, Any]:
    """The output is formatted as a dictionary with the `generation`. The `model_name`
    will be automatically included within the `process` method of `Task`."""
    return {"generation": output}

ComplexityScorer

Bases: Task

Score instructions based on their complexity using an LLM.

ComplexityScorer is a pre-defined task used to rank a list of instructions based on their complexity. It's an implementation of the complexity score task from the paper 'What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning'.

Attributes:

Name Type Description
_template Union[Template, None]

a Jinja2 template used to format the input for the LLM.

Input columns
  • instructions (List[str]): The list of instructions to be scored.
Output columns
  • scores (List[float]): The score for each instruction.
  • model_name (str): The model name used to generate the scores.
Categories
  • scorer
  • complexity
  • instruction
References
  • What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning (https://arxiv.org/abs/2312.15685)

Examples:

Evaluate the complexity of your instructions:

```python
from distilabel.steps.tasks import ComplexityScorer
from distilabel.llms.huggingface import InferenceEndpointsLLM

# Consider this as a placeholder for your actual LLM.
scorer = ComplexityScorer(
    llm=InferenceEndpointsLLM(
        model_id="mistralai/Mistral-7B-Instruct-v0.2",
    )
)

scorer.load()

result = next(
    scorer.process(
        [{"instructions": ["plain instruction", "highly complex instruction"]}]
    )
)
# result
# [{'instructions': ['plain instruction', 'highly complex instruction'], 'model_name': 'test', 'scores': [1, 5], 'distilabel_metadata': {'raw_output_complexity_scorer_0': 'output'}}]
```

Citations:

```
@misc{liu2024makesgooddataalignment,
    title={What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning},
    author={Wei Liu and Weihao Zeng and Keqing He and Yong Jiang and Junxian He},
    year={2024},
    eprint={2312.15685},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2312.15685},
}
```
Source code in src/distilabel/steps/tasks/complexity_scorer.py
class ComplexityScorer(Task):
    """Score instructions based on their complexity using an `LLM`.

    `ComplexityScorer` is a pre-defined task used to rank a list of instructions based in
    their complexity. It's an implementation of the complexity score task from the paper
    'What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection
    in Instruction Tuning'.

    Attributes:
        _template: a Jinja2 template used to format the input for the LLM.

    Input columns:
        - instructions (`List[str]`): The list of instructions to be scored.

    Output columns:
        - scores (`List[float]`): The score for each instruction.
        - model_name (`str`): The model name used to generate the scores.

    Categories:
        - scorer
        - complexity
        - instruction

    References:
        - [`What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning`](https://arxiv.org/abs/2312.15685)

    Examples:

        Evaluate the complexity of your instructions:

        ```python
        from distilabel.steps.tasks import ComplexityScorer
        from distilabel.llms.huggingface import InferenceEndpointsLLM

        # Consider this as a placeholder for your actual LLM.
        scorer = ComplexityScorer(
            llm=InferenceEndpointsLLM(
                model_id="mistralai/Mistral-7B-Instruct-v0.2",
            )
        )

        scorer.load()

        result = next(
            scorer.process(
                [{"instructions": ["plain instruction", "highly complex instruction"]}]
            )
        )
        # result
        # [{'instructions': ['plain instruction', 'highly complex instruction'], 'model_name': 'test', 'scores': [1, 5], 'distilabel_metadata': {'raw_output_complexity_scorer_0': 'output'}}]
        ```

    Citations:

        ```
        @misc{liu2024makesgooddataalignment,
            title={What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning},
            author={Wei Liu and Weihao Zeng and Keqing He and Yong Jiang and Junxian He},
            year={2024},
            eprint={2312.15685},
            archivePrefix={arXiv},
            primaryClass={cs.CL},
            url={https://arxiv.org/abs/2312.15685},
        }
        ```
    """

    _template: Union[Template, None] = PrivateAttr(...)

    def load(self) -> None:
        """Loads the Jinja2 template."""
        super().load()

        _path = str(
            importlib_resources.files("distilabel")
            / "steps"
            / "tasks"
            / "templates"
            / "complexity-scorer.jinja2"
        )

        self._template = Template(open(_path).read())

    @property
    def inputs(self) -> List[str]:
        """The inputs for the task are the `instructions`."""
        return ["instructions"]

    def format_input(self, input: Dict[str, Any]) -> "ChatType":
        """The input is formatted as a `ChatType` assuming that the instruction
        is the first interaction from the user within a conversation."""
        return [
            {
                "role": "user",
                "content": self._template.render(instructions=input["instructions"]),  # type: ignore
            }
        ]

    @property
    def outputs(self) -> List[str]:
        """The output for the task are: a list of `scores` containing the complexity score for each
        instruction in `instructions`, and the `model_name`."""
        return ["scores", "model_name"]

    def format_output(
        self, output: Union[str, None], input: Dict[str, Any]
    ) -> Dict[str, Any]:
        """The output is formatted as a list with the score of each instruction.

        Args:
            output: the raw output of the LLM.
            input: the input to the task. Used for obtaining the number of responses.

        Returns:
            A dict with the key `scores` containing the scores for each instruction.
        """
        if output is None:
            return {"scores": [None] * len(input["instructions"])}

        scores = []
        score_lines = output.split("\n")
        for i, line in enumerate(score_lines):
            match = _PARSE_SCORE_LINE_REGEX.match(line)
            score = float(match.group(1)) if match else None
            scores.append(score)
            if i == len(input["instructions"]) - 1:
                break
        return {"scores": scores}

inputs: List[str] property

The inputs for the task are the instructions.

outputs: List[str] property

The outputs for the task are: a list of scores containing the complexity score for each instruction in instructions, and the model_name.

format_input(input)

The input is formatted as a ChatType assuming that the instruction is the first interaction from the user within a conversation.

Source code in src/distilabel/steps/tasks/complexity_scorer.py
def format_input(self, input: Dict[str, Any]) -> "ChatType":
    """The input is formatted as a `ChatType` assuming that the instruction
    is the first interaction from the user within a conversation."""
    return [
        {
            "role": "user",
            "content": self._template.render(instructions=input["instructions"]),  # type: ignore
        }
    ]

format_output(output, input)

The output is formatted as a list with the score of each instruction.

Parameters:

Name Type Description Default
output Union[str, None]

the raw output of the LLM.

required
input Dict[str, Any]

the input to the task. Used for obtaining the number of responses.

required

Returns:

Type Description
Dict[str, Any]

A dict with the key scores containing the scores for each instruction.

Source code in src/distilabel/steps/tasks/complexity_scorer.py
def format_output(
    self, output: Union[str, None], input: Dict[str, Any]
) -> Dict[str, Any]:
    """The output is formatted as a list with the score of each instruction.

    Args:
        output: the raw output of the LLM.
        input: the input to the task. Used for obtaining the number of responses.

    Returns:
        A dict with the key `scores` containing the scores for each instruction.
    """
    if output is None:
        return {"scores": [None] * len(input["instructions"])}

    scores = []
    score_lines = output.split("\n")
    for i, line in enumerate(score_lines):
        match = _PARSE_SCORE_LINE_REGEX.match(line)
        score = float(match.group(1)) if match else None
        scores.append(score)
        if i == len(input["instructions"]) - 1:
            break
    return {"scores": scores}

load()

Loads the Jinja2 template.

Source code in src/distilabel/steps/tasks/complexity_scorer.py
def load(self) -> None:
    """Loads the Jinja2 template."""
    super().load()

    _path = str(
        importlib_resources.files("distilabel")
        / "steps"
        / "tasks"
        / "templates"
        / "complexity-scorer.jinja2"
    )

    self._template = Template(open(_path).read())

EvolComplexity

Bases: EvolInstruct

Evolve instructions to make them more complex using an LLM.

EvolComplexity is a task that evolves instructions to make them more complex. It is based on the EvolInstruct task, using slightly different prompts but the exact same evolutionary approach.

Attributes:

Name Type Description
num_instructions

The number of instructions to be generated.

generate_answers

Whether to generate answers for the instructions or not. Defaults to False.

mutation_templates Dict[str, str]

The mutation templates to be used for the generation of the instructions.

min_length RuntimeParameter[int]

Defines the length (in bytes) that the generated instruction needs to be higher than, to be considered valid. Defaults to 512.

max_length RuntimeParameter[int]

Defines the length (in bytes) that the generated instruction needs to be lower than, to be considered valid. Defaults to 1024.

seed RuntimeParameter[int]

The seed to be set for numpy in order to randomly pick a mutation method. Defaults to 42.

Runtime parameters
  • min_length: Defines the length (in bytes) that the generated instruction needs to be higher than, to be considered valid.
  • max_length: Defines the length (in bytes) that the generated instruction needs to be lower than, to be considered valid.
  • seed: The seed to be set for numpy in order to randomly pick a mutation method.
Input columns
  • instruction (str): The instruction to evolve.
Output columns
  • evolved_instruction (str): The evolved instruction.
  • answer (str, optional): The answer to the instruction if generate_answers=True.
  • model_name (str): The name of the LLM used to evolve the instructions.
Categories
  • evol
  • instruction
  • deita
References
  • What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning (https://arxiv.org/abs/2312.15685)
  • WizardLM: Empowering Large Language Models to Follow Complex Instructions (https://arxiv.org/abs/2304.12244)

Examples:

Evolve an instruction using an LLM:

```python
from distilabel.steps.tasks import EvolComplexity
from distilabel.llms.huggingface import InferenceEndpointsLLM

# Consider this as a placeholder for your actual LLM.
evol_complexity = EvolComplexity(
    llm=InferenceEndpointsLLM(
        model_id="mistralai/Mistral-7B-Instruct-v0.2",
    ),
    num_evolutions=2,
)

evol_complexity.load()

result = next(evol_complexity.process([{"instruction": "common instruction"}]))
# result
# [{'instruction': 'common instruction', 'evolved_instruction': 'evolved instruction', 'model_name': 'model_name'}]
```

Citations:

```
@misc{liu2024makesgooddataalignment,
    title={What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning},
    author={Wei Liu and Weihao Zeng and Keqing He and Yong Jiang and Junxian He},
    year={2024},
    eprint={2312.15685},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2312.15685},
}
```

```
@misc{xu2023wizardlmempoweringlargelanguage,
    title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
    author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
    year={2023},
    eprint={2304.12244},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2304.12244},
}
```
Source code in src/distilabel/steps/tasks/evol_instruct/evol_complexity/base.py
class EvolComplexity(EvolInstruct):
    """Evolve instructions to make them more complex using an `LLM`.

    `EvolComplexity` is a task that evolves instructions to make them more complex,
    and it is based in the EvolInstruct task, using slight different prompts, but the
    exact same evolutionary approach.

    Attributes:
        num_instructions: The number of instructions to be generated.
        generate_answers: Whether to generate answers for the instructions or not. Defaults
            to `False`.
        mutation_templates: The mutation templates to be used for the generation of the
            instructions.
        min_length: Defines the length (in bytes) that the generated instruction needs to
            be higher than, to be considered valid. Defaults to `512`.
        max_length: Defines the length (in bytes) that the generated instruction needs to
            be lower than, to be considered valid. Defaults to `1024`.
        seed: The seed to be set for `numpy` in order to randomly pick a mutation method.
            Defaults to `42`.

    Runtime parameters:
        - `min_length`: Defines the length (in bytes) that the generated instruction needs to be higher than, to be considered valid.
        - `max_length`: Defines the length (in bytes) that the generated instruction needs to be lower than, to be considered valid.
        - `seed`: The number of evolutions to be run.

    Input columns:
        - instruction (`str`): The instruction to evolve.

    Output columns:
        - evolved_instruction (`str`): The evolved instruction.
        - answer (`str`, optional): The answer to the instruction if `generate_answers=True`.
        - model_name (`str`): The name of the LLM used to evolve the instructions.

    Categories:
        - evol
        - instruction
        - deita

    References:
        - [What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning](https://arxiv.org/abs/2312.15685)
        - [WizardLM: Empowering Large Language Models to Follow Complex Instructions](https://arxiv.org/abs/2304.12244)

    Examples:

        Evolve an instruction using an LLM:

        ```python
        from distilabel.steps.tasks import EvolComplexity
        from distilabel.llms.huggingface import InferenceEndpointsLLM

        # Consider this as a placeholder for your actual LLM.
        evol_complexity = EvolComplexity(
            llm=InferenceEndpointsLLM(
                model_id="mistralai/Mistral-7B-Instruct-v0.2",
            ),
            num_evolutions=2,
        )

        evol_complexity.load()

        result = next(evol_complexity.process([{"instruction": "common instruction"}]))
        # result
        # [{'instruction': 'common instruction', 'evolved_instruction': 'evolved instruction', 'model_name': 'model_name'}]
        ```

    Citations:

        ```
        @misc{liu2024makesgooddataalignment,
            title={What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning},
            author={Wei Liu and Weihao Zeng and Keqing He and Yong Jiang and Junxian He},
            year={2024},
            eprint={2312.15685},
            archivePrefix={arXiv},
            primaryClass={cs.CL},
            url={https://arxiv.org/abs/2312.15685},
        }
        ```

        ```
        @misc{xu2023wizardlmempoweringlargelanguage,
            title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
            author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
            year={2023},
            eprint={2304.12244},
            archivePrefix={arXiv},
            primaryClass={cs.CL},
            url={https://arxiv.org/abs/2304.12244},
        }
        ```
    """

    mutation_templates: Dict[str, str] = MUTATION_TEMPLATES

EvolComplexityGenerator

Bases: EvolInstructGenerator

Generate evolved instructions with increased complexity using an LLM.

EvolComplexityGenerator is a generation task that evolves instructions to make them more complex. It is based on the EvolInstruct task, using slightly different prompts but the exact same evolutionary approach.

Attributes:

Name Type Description
num_instructions

The number of instructions to be generated.

generate_answers

Whether to generate answers for the instructions or not. Defaults to False.

mutation_templates Dict[str, str]

The mutation templates to be used for the generation of the instructions.

min_length RuntimeParameter[int]

Defines the length (in bytes) that the generated instruction needs to be higher than, to be considered valid. Defaults to 512.

max_length RuntimeParameter[int]

Defines the length (in bytes) that the generated instruction needs to be lower than, to be considered valid. Defaults to 1024.

seed RuntimeParameter[int]

The seed to be set for numpy in order to randomly pick a mutation method. Defaults to 42.

Runtime parameters
  • min_length: Defines the length (in bytes) that the generated instruction needs to be higher than, to be considered valid.
  • max_length: Defines the length (in bytes) that the generated instruction needs to be lower than, to be considered valid.
  • seed: The seed to be set for numpy in order to randomly pick a mutation method.
Output columns
  • instruction (str): The evolved instruction.
  • answer (str, optional): The answer to the instruction if generate_answers=True.
  • model_name (str): The name of the LLM used to evolve the instructions.
Categories
  • evol
  • instruction
  • generation
  • deita
References
  • What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning (https://arxiv.org/abs/2312.15685)
  • WizardLM: Empowering Large Language Models to Follow Complex Instructions (https://arxiv.org/abs/2304.12244)

Examples:

Generate evolved instructions without initial instructions:

```python
from distilabel.steps.tasks import EvolComplexityGenerator
from distilabel.llms.huggingface import InferenceEndpointsLLM

# Consider this as a placeholder for your actual LLM.
evol_complexity_generator = EvolComplexityGenerator(
    llm=InferenceEndpointsLLM(
        model_id="mistralai/Mistral-7B-Instruct-v0.2",
    ),
    num_instructions=2,
)

evol_complexity_generator.load()

result = next(evol_complexity_generator.process())
# result
# [{'instruction': 'generated instruction', 'model_name': 'test'}]
```

Citations:

```
@misc{liu2024makesgooddataalignment,
    title={What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning},
    author={Wei Liu and Weihao Zeng and Keqing He and Yong Jiang and Junxian He},
    year={2024},
    eprint={2312.15685},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2312.15685},
}
```

```
@misc{xu2023wizardlmempoweringlargelanguage,
    title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
    author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
    year={2023},
    eprint={2304.12244},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2304.12244},
}
```
Source code in src/distilabel/steps/tasks/evol_instruct/evol_complexity/generator.py
class EvolComplexityGenerator(EvolInstructGenerator):
    """Generate evolved instructions with increased complexity using an `LLM`.

    `EvolComplexityGenerator` is a generation task that evolves instructions to make
    them more complex, and it is based in the EvolInstruct task, but using slight different
    prompts, but the exact same evolutionary approach.

    Attributes:
        num_instructions: The number of instructions to be generated.
        generate_answers: Whether to generate answers for the instructions or not. Defaults
            to `False`.
        mutation_templates: The mutation templates to be used for the generation of the
            instructions.
        min_length: Defines the length (in bytes) that the generated instruction needs to
            be higher than, to be considered valid. Defaults to `512`.
        max_length: Defines the length (in bytes) that the generated instruction needs to
            be lower than, to be considered valid. Defaults to `1024`.
        seed: The seed to be set for `numpy` in order to randomly pick a mutation method.
            Defaults to `42`.

    Runtime parameters:
        - `min_length`: Defines the length (in bytes) that the generated instruction needs to be higher than, to be considered valid.
        - `max_length`: Defines the length (in bytes) that the generated instruction needs to be lower than, to be considered valid.
        - `seed`: The number of evolutions to be run.

    Output columns:
        - instruction (`str`): The evolved instruction.
        - answer (`str`, optional): The answer to the instruction if `generate_answers=True`.
        - model_name (`str`): The name of the LLM used to evolve the instructions.

    Categories:
        - evol
        - instruction
        - generation
        - deita

    References:
        - [What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning](https://arxiv.org/abs/2312.15685)
        - [WizardLM: Empowering Large Language Models to Follow Complex Instructions](https://arxiv.org/abs/2304.12244)

    Examples:

        Generate evolved instructions without initial instructions:

        ```python
        from distilabel.steps.tasks import EvolComplexityGenerator
        from distilabel.llms.huggingface import InferenceEndpointsLLM

        # Consider this as a placeholder for your actual LLM.
        evol_complexity_generator = EvolComplexityGenerator(
            llm=InferenceEndpointsLLM(
                model_id="mistralai/Mistral-7B-Instruct-v0.2",
            ),
            num_instructions=2,
        )

        evol_complexity_generator.load()

        result = next(scorer.process())
        # result
        # [{'instruction': 'generated instruction', 'model_name': 'test'}]
        ```

    Citations:

        ```
        @misc{liu2024makesgooddataalignment,
            title={What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning},
            author={Wei Liu and Weihao Zeng and Keqing He and Yong Jiang and Junxian He},
            year={2024},
            eprint={2312.15685},
            archivePrefix={arXiv},
            primaryClass={cs.CL},
            url={https://arxiv.org/abs/2312.15685},
        }
        ```

        ```
        @misc{xu2023wizardlmempoweringlargelanguage,
            title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
            author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
            year={2023},
            eprint={2304.12244},
            archivePrefix={arXiv},
            primaryClass={cs.CL},
            url={https://arxiv.org/abs/2304.12244},
        }
        ```
    """

    mutation_templates: Dict[str, str] = GENERATION_MUTATION_TEMPLATES

EvolInstruct

Bases: Task

Evolve instructions using an LLM.

The task implements the evolutionary approach introduced in "WizardLM: Empowering Large Language Models to Follow Complex Instructions".

Attributes:

Name Type Description
num_evolutions int

The number of evolutions to be performed.

store_evolutions bool

Whether to store all the evolutions or just the last one. Defaults to False.

generate_answers bool

Whether to generate answers for the evolved instructions. Defaults to False.

include_original_instruction bool

Whether to include the original instruction in the evolved_instructions output column. Defaults to False.

mutation_templates Dict[str, str]

The mutation templates to be used for evolving the instructions. Defaults to the ones provided in the utils.py file.

seed RuntimeParameter[int]

The seed to be set for numpy in order to randomly pick a mutation method. Defaults to 42.

Runtime parameters
  • seed: The seed to be set for numpy in order to randomly pick a mutation method.
Input columns
  • instruction (str): The instruction to evolve.
Output columns
  • evolved_instruction (str): The evolved instruction if store_evolutions=False.
  • evolved_instructions (List[str]): The evolved instructions if store_evolutions=True.
  • model_name (str): The name of the LLM used to evolve the instructions.
  • answer (str): The answer to the evolved instruction if generate_answers=True and store_evolutions=False.
  • answers (List[str]): The answers to the evolved instructions if generate_answers=True and store_evolutions=True.
Categories
  • evol
  • instruction
References
  • WizardLM: Empowering Large Language Models to Follow Complex Instructions (https://arxiv.org/abs/2304.12244)
  • GitHub: h2oai/h2o-wizardlm (https://github.com/h2oai/h2o-wizardlm)

Examples:

Evolve an instruction using an LLM:

```python
from distilabel.steps.tasks import EvolInstruct
from distilabel.llms.huggingface import InferenceEndpointsLLM

# Consider this as a placeholder for your actual LLM.
evol_instruct = EvolInstruct(
    llm=InferenceEndpointsLLM(
        model_id="mistralai/Mistral-7B-Instruct-v0.2",
    ),
    num_evolutions=2,
)

evol_instruct.load()

result = next(evol_instruct.process([{"instruction": "common instruction"}]))
# result
# [{'instruction': 'common instruction', 'evolved_instruction': 'evolved instruction', 'model_name': 'model_name'}]
```

Keep the iterations of the evolutions:

```python
from distilabel.steps.tasks import EvolInstruct
from distilabel.llms.huggingface import InferenceEndpointsLLM

# Consider this as a placeholder for your actual LLM.
evol_instruct = EvolInstruct(
    llm=InferenceEndpointsLLM(
        model_id="mistralai/Mistral-7B-Instruct-v0.2",
    ),
    num_evolutions=2,
    store_evolutions=True,
)

evol_instruct.load()

result = next(evol_instruct.process([{"instruction": "common instruction"}]))
# result
# [
#     {
#         'instruction': 'common instruction',
#         'evolved_instructions': ['initial evolution', 'final evolution'],
#         'model_name': 'model_name'
#     }
# ]
```

Generate answers for the instructions in a single step:

```python
from distilabel.steps.tasks import EvolInstruct
from distilabel.llms.huggingface import InferenceEndpointsLLM

# Consider this as a placeholder for your actual LLM.
evol_instruct = EvolInstruct(
    llm=InferenceEndpointsLLM(
        model_id="mistralai/Mistral-7B-Instruct-v0.2",
    ),
    num_evolutions=2,
    generate_answers=True,
)

evol_instruct.load()

result = next(evol_instruct.process([{"instruction": "common instruction"}]))
# result
# [
#     {
#         'instruction': 'common instruction',
#         'evolved_instruction': 'evolved instruction',
#         'answer': 'answer to the instruction',
#         'model_name': 'model_name'
#     }
# ]
```

Citations:

```
@misc{xu2023wizardlmempoweringlargelanguage,
    title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
    author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
    year={2023},
    eprint={2304.12244},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2304.12244},
}
```
Source code in src/distilabel/steps/tasks/evol_instruct/base.py
class EvolInstruct(Task):
    """Evolve instructions using an `LLM`.

    WizardLM: Empowering Large Language Models to Follow Complex Instructions

    Attributes:
        num_evolutions: The number of evolutions to be performed.
        store_evolutions: Whether to store all the evolutions or just the last one. Defaults
            to `False`.
        generate_answers: Whether to generate answers for the evolved instructions. Defaults
            to `False`.
        include_original_instruction: Whether to include the original instruction in the
            `evolved_instructions` output column. Defaults to `False`.
        mutation_templates: The mutation templates to be used for evolving the instructions.
            Defaults to the ones provided in the `utils.py` file.
        seed: The seed to be set for `numpy` in order to randomly pick a mutation method.
            Defaults to `42`.

    Runtime parameters:
        - `seed`: The seed to be set for `numpy` in order to randomly pick a mutation method.

    Input columns:
        - instruction (`str`): The instruction to evolve.

    Output columns:
        - evolved_instruction (`str`): The evolved instruction if `store_evolutions=False`.
        - evolved_instructions (`List[str]`): The evolved instructions if `store_evolutions=True`.
        - model_name (`str`): The name of the LLM used to evolve the instructions.
        - answer (`str`): The answer to the evolved instruction if `generate_answers=True`
            and `store_evolutions=False`.
        - answers (`List[str]`): The answers to the evolved instructions if `generate_answers=True`
            and `store_evolutions=True`.

    Categories:
        - evol
        - instruction

    References:
        - [WizardLM: Empowering Large Language Models to Follow Complex Instructions](https://arxiv.org/abs/2304.12244)
        - [GitHub: h2oai/h2o-wizardlm](https://github.com/h2oai/h2o-wizardlm)

    Examples:

        Evolve an instruction using an LLM:

        ```python
        from distilabel.steps.tasks import EvolInstruct
        from distilabel.llms.huggingface import InferenceEndpointsLLM

        # Consider this as a placeholder for your actual LLM.
        evol_instruct = EvolInstruct(
            llm=InferenceEndpointsLLM(
                model_id="mistralai/Mistral-7B-Instruct-v0.2",
            ),
            num_evolutions=2,
        )

        evol_instruct.load()

        result = next(evol_instruct.process([{"instruction": "common instruction"}]))
        # result
        # [{'instruction': 'common instruction', 'evolved_instruction': 'evolved instruction', 'model_name': 'model_name'}]
        ```

        Keep the iterations of the evolutions:

        ```python
        from distilabel.steps.tasks import EvolInstruct
        from distilabel.llms.huggingface import InferenceEndpointsLLM

        # Consider this as a placeholder for your actual LLM.
        evol_instruct = EvolInstruct(
            llm=InferenceEndpointsLLM(
                model_id="mistralai/Mistral-7B-Instruct-v0.2",
            ),
            num_evolutions=2,
            store_evolutions=True,
        )

        evol_instruct.load()

        result = next(evol_instruct.process([{"instruction": "common instruction"}]))
        # result
        # [
        #     {
        #         'instruction': 'common instruction',
        #         'evolved_instructions': ['initial evolution', 'final evolution'],
        #         'model_name': 'model_name'
        #     }
        # ]
        ```

        Generate answers for the instructions in a single step:

        ```python
        from distilabel.steps.tasks import EvolInstruct
        from distilabel.llms.huggingface import InferenceEndpointsLLM

        # Consider this as a placeholder for your actual LLM.
        evol_instruct = EvolInstruct(
            llm=InferenceEndpointsLLM(
                model_id="mistralai/Mistral-7B-Instruct-v0.2",
            ),
            num_evolutions=2,
            generate_answers=True,
        )

        evol_instruct.load()

        result = next(evol_instruct.process([{"instruction": "common instruction"}]))
        # result
        # [
        #     {
        #         'instruction': 'common instruction',
        #         'evolved_instruction': 'evolved instruction',
        #         'answer': 'answer to the instruction',
        #         'model_name': 'model_name'
        #     }
        # ]
        ```

    Citations:

        ```
        @misc{xu2023wizardlmempoweringlargelanguage,
            title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
            author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
            year={2023},
            eprint={2304.12244},
            archivePrefix={arXiv},
            primaryClass={cs.CL},
            url={https://arxiv.org/abs/2304.12244},
        }
        ```
    """

    num_evolutions: int
    store_evolutions: bool = False
    generate_answers: bool = False
    include_original_instruction: bool = False
    mutation_templates: Dict[str, str] = MUTATION_TEMPLATES

    seed: RuntimeParameter[int] = Field(
        default=42,
        description="As `numpy` is being used in order to randomly pick a mutation method, then is nice to seed a random seed.",
    )

    @property
    def inputs(self) -> List[str]:
        """The input for the task is the `instruction`."""
        return ["instruction"]

    def format_input(self, input: str) -> ChatType:  # type: ignore
        """The input is formatted as a `ChatType` assuming that the instruction
        is the first interaction from the user within a conversation. And the
        `system_prompt` is added as the first message if it exists."""
        return [{"role": "user", "content": input}]

    @property
    def outputs(self) -> List[str]:
        """The output for the task are the `evolved_instruction/s`, the `answer` if `generate_answers=True`
        and the `model_name`."""
        # TODO: having to define a `model_name` column every time as the `Task.outputs` is not ideal,
        # this could be handled always and the value could be included within the DAG validation when
        # a `Task` is used, since all the `Task` subclasses will have an `llm` with a `model_name` attr.
        _outputs = [
            (
                "evolved_instruction"
                if not self.store_evolutions
                else "evolved_instructions"
            ),
            "model_name",
        ]
        if self.generate_answers:
            _outputs.append("answer" if not self.store_evolutions else "answers")
        return _outputs

    @override
    def format_output(  # type: ignore
        self, instructions: Union[str, List[str]], answers: Optional[List[str]] = None
    ) -> Dict[str, Any]:  # type: ignore
        """The output for the task is a dict with: `evolved_instruction` or `evolved_instructions`,
        depending whether the value is either `False` or `True` for `store_evolutions`, respectively;
        `answer` if `generate_answers=True`; and, finally, the `model_name`.

        Args:
            instructions: The instructions to be included within the output.
            answers: The answers to be included within the output if `generate_answers=True`.

        Returns:
            If `store_evolutions=False` and `generate_answers=True` return {"evolved_instruction": ..., "model_name": ..., "answer": ...};
            if `store_evolutions=True` and `generate_answers=True` return {"evolved_instructions": ..., "model_name": ..., "answer": ...};
            if `store_evolutions=False` and `generate_answers=False` return {"evolved_instruction": ..., "model_name": ...};
            if `store_evolutions=True` and `generate_answers=False` return {"evolved_instructions": ..., "model_name": ...}.
        """
        _output = {}
        if not self.store_evolutions:
            _output["evolved_instruction"] = instructions[-1]
        else:
            _output["evolved_instructions"] = instructions

        if self.generate_answers and answers:
            if not self.store_evolutions:
                _output["answer"] = answers[-1]
            else:
                _output["answers"] = answers

        _output["model_name"] = self.llm.model_name
        return _output

    @property
    def mutation_templates_names(self) -> List[str]:
        """Returns the names i.e. keys of the provided `mutation_templates`."""
        return list(self.mutation_templates.keys())

    def _apply_random_mutation(self, instruction: str) -> str:
        """Applies a random mutation from the ones provided as part of the `mutation_templates`
        enum, and returns the provided instruction within the mutation prompt.

        Args:
            instruction: The instruction to be included within the mutation prompt.

        Returns:
            A random mutation prompt with the provided instruction.
        """
        mutation = np.random.choice(self.mutation_templates_names)
        return self.mutation_templates[mutation].replace("<PROMPT>", instruction)  # type: ignore

    def _evolve_instructions(self, inputs: "StepInput") -> List[List[str]]:
        """Evolves the instructions provided as part of the inputs of the task.

        Args:
            inputs: A list of Python dictionaries with the inputs of the task.

        Returns:
            A list where each item is a list with either the last evolved instruction if
            `store_evolutions=False` or all the evolved instructions if `store_evolutions=True`.
        """

        instructions: List[List[str]] = [[input["instruction"]] for input in inputs]

        for iter_no in range(self.num_evolutions):
            formatted_prompts = []
            for instruction in instructions:
                formatted_prompts.append(self._apply_random_mutation(instruction[-1]))

            formatted_prompts = [
                self.format_input(prompt) for prompt in formatted_prompts
            ]
            generated_prompts = flatten_responses(
                self.llm.generate(
                    formatted_prompts,
                    **self.llm.generation_kwargs,  # type: ignore
                )
            )

            evolved_instructions = []
            for generated_prompt in generated_prompts:
                generated_prompt = generated_prompt.split("Prompt#:")[-1].strip()
                evolved_instructions.append(generated_prompt)

            if self.store_evolutions:
                instructions = [
                    instruction + [evolved_instruction]
                    for instruction, evolved_instruction in zip(
                        instructions, evolved_instructions
                    )
                ]
            else:
                instructions = [
                    [evolved_instruction]
                    for evolved_instruction in evolved_instructions
                ]

            self._logger.info(
                f"🔄 Ran iteration {iter_no} evolving {len(instructions)} instructions!"
            )

        return instructions

    def _generate_answers(
        self, evolved_instructions: List[List[str]]
    ) -> List[List[str]]:
        """Generates the answer for the instructions in `instructions`.

        Args:
            evolved_instructions: A list of lists where each item is a list with either the last
                evolved instruction if `store_evolutions=False` or all the evolved instructions
                if `store_evolutions=True`.

        Returns:
            A list of answers for each instruction.
        """
        formatted_instructions = [
            self.format_input(instruction)
            for instructions in evolved_instructions
            for instruction in instructions
        ]

        responses = self.llm.generate(
            formatted_instructions,
            num_generations=1,
            **self.llm.generation_kwargs,  # type: ignore
        )

        step = (
            self.num_evolutions
            if not self.include_original_instruction
            else self.num_evolutions + 1
        )
        return [
            flatten_responses(responses[i : i + step])
            for i in range(0, len(responses), step)
        ]

    @override
    def process(self, inputs: StepInput) -> "StepOutput":  # type: ignore
        """Processes the inputs of the task and generates the outputs using the LLM.

        Args:
            inputs: A list of Python dictionaries with the inputs of the task.

        Yields:
            A list of Python dictionaries with the outputs of the task.
        """

        evolved_instructions = self._evolve_instructions(inputs)

        if self.store_evolutions:
            # Remove the input instruction from the `evolved_instructions` list
            from_ = 1 if not self.include_original_instruction else 0
            evolved_instructions = [
                instruction[from_:] for instruction in evolved_instructions
            ]

        if not self.generate_answers:
            for input, instruction in zip(inputs, evolved_instructions):
                input.update(self.format_output(instruction))
            yield inputs

        self._logger.info(
            f"🎉 Finished evolving {len(evolved_instructions)} instructions!"
        )

        if self.generate_answers:
            self._logger.info(
                f"🧠 Generating answers for the {len(evolved_instructions)} evolved instructions!"
            )

            answers = self._generate_answers(evolved_instructions)

            self._logger.info(
                f"🎉 Finished generating answers for the {len(evolved_instructions)} evolved"
                " instructions!"
            )

            for idx, (input, instruction) in enumerate(
                zip(inputs, evolved_instructions)
            ):
                input.update(self.format_output(instruction, answers[idx]))
            yield inputs

inputs: List[str] property

The input for the task is the instruction.

mutation_templates_names: List[str] property

Returns the names i.e. keys of the provided mutation_templates.

outputs: List[str] property

The outputs for the task are the evolved_instruction/s, the answer if generate_answers=True, and the model_name.

_apply_random_mutation(instruction)

Applies a random mutation from the ones provided as part of the mutation_templates enum, and returns the provided instruction within the mutation prompt.

Parameters:

Name Type Description Default
instruction str

The instruction to be included within the mutation prompt.

required

Returns:

Type Description
str

A random mutation prompt with the provided instruction.

Source code in src/distilabel/steps/tasks/evol_instruct/base.py
def _apply_random_mutation(self, instruction: str) -> str:
    """Applies a random mutation from the ones provided as part of the `mutation_templates`
    enum, and returns the provided instruction within the mutation prompt.

    Args:
        instruction: The instruction to be included within the mutation prompt.

    Returns:
        A random mutation prompt with the provided instruction.
    """
    mutation = np.random.choice(self.mutation_templates_names)
    return self.mutation_templates[mutation].replace("<PROMPT>", instruction)  # type: ignore
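
As a quick sketch of what the replacement amounts to, using a made-up mutation template (the real templates are the ones provided in the utils.py file mentioned above):

```python
import numpy as np

# Hypothetical mutation template; each real template also contains the `<PROMPT>`
# placeholder that gets replaced with the instruction to evolve.
mutation_templates = {
    "ADD_CONSTRAINTS": "Rewrite the following prompt adding one extra constraint:\n<PROMPT>",
}

mutation = np.random.choice(list(mutation_templates.keys()))
prompt = mutation_templates[mutation].replace("<PROMPT>", "common instruction")
# "Rewrite the following prompt adding one extra constraint:\ncommon instruction"
```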

_evolve_instructions(inputs)

Evolves the instructions provided as part of the inputs of the task.

Parameters:

Name Type Description Default
inputs StepInput

A list of Python dictionaries with the inputs of the task.

required

Returns:

Type Description
List[List[str]]

A list where each item is a list with either the last evolved instruction if

List[List[str]]

store_evolutions=False or all the evolved instructions if store_evolutions=True.

Source code in src/distilabel/steps/tasks/evol_instruct/base.py
def _evolve_instructions(self, inputs: "StepInput") -> List[List[str]]:
    """Evolves the instructions provided as part of the inputs of the task.

    Args:
        inputs: A list of Python dictionaries with the inputs of the task.

    Returns:
        A list where each item is a list with either the last evolved instruction if
        `store_evolutions=False` or all the evolved instructions if `store_evolutions=True`.
    """

    instructions: List[List[str]] = [[input["instruction"]] for input in inputs]

    for iter_no in range(self.num_evolutions):
        formatted_prompts = []
        for instruction in instructions:
            formatted_prompts.append(self._apply_random_mutation(instruction[-1]))

        formatted_prompts = [
            self.format_input(prompt) for prompt in formatted_prompts
        ]
        generated_prompts = flatten_responses(
            self.llm.generate(
                formatted_prompts,
                **self.llm.generation_kwargs,  # type: ignore
            )
        )

        evolved_instructions = []
        for generated_prompt in generated_prompts:
            generated_prompt = generated_prompt.split("Prompt#:")[-1].strip()
            evolved_instructions.append(generated_prompt)

        if self.store_evolutions:
            instructions = [
                instruction + [evolved_instruction]
                for instruction, evolved_instruction in zip(
                    instructions, evolved_instructions
                )
            ]
        else:
            instructions = [
                [evolved_instruction]
                for evolved_instruction in evolved_instructions
            ]

        self._logger.info(
            f"🔄 Ran iteration {iter_no} evolving {len(instructions)} instructions!"
        )

    return instructions

_generate_answers(evolved_instructions)

Generates the answers for the instructions in instructions.

Parameters:

Name Type Description Default
evolved_instructions List[List[str]]

A list of lists where each item is a list with either the last evolved instruction if store_evolutions=False or all the evolved instructions if store_evolutions=True.

required

Returns:

Type Description
List[List[str]]

A list of answers for each instruction.

Source code in src/distilabel/steps/tasks/evol_instruct/base.py
def _generate_answers(
    self, evolved_instructions: List[List[str]]
) -> List[List[str]]:
    """Generates the answer for the instructions in `instructions`.

    Args:
        evolved_instructions: A list of lists where each item is a list with either the last
            evolved instruction if `store_evolutions=False` or all the evolved instructions
            if `store_evolutions=True`.

    Returns:
        A list of answers for each instruction.
    """
    formatted_instructions = [
        self.format_input(instruction)
        for instructions in evolved_instructions
        for instruction in instructions
    ]

    responses = self.llm.generate(
        formatted_instructions,
        num_generations=1,
        **self.llm.generation_kwargs,  # type: ignore
    )

    step = (
        self.num_evolutions
        if not self.include_original_instruction
        else self.num_evolutions + 1
    )
    return [
        flatten_responses(responses[i : i + step])
        for i in range(0, len(responses), step)
    ]

format_input(input)

The input is formatted as a ChatType assuming that the instruction is the first interaction from the user within a conversation. And the system_prompt is added as the first message if it exists.

Source code in src/distilabel/steps/tasks/evol_instruct/base.py
def format_input(self, input: str) -> ChatType:  # type: ignore
    """The input is formatted as a `ChatType` assuming that the instruction
    is the first interaction from the user within a conversation. And the
    `system_prompt` is added as the first message if it exists."""
    return [{"role": "user", "content": input}]

format_output(instructions, answers=None)

The output for the task is a dict with: evolved_instruction or evolved_instructions, depending whether the value is either False or True for store_evolutions, respectively; answer if generate_answers=True; and, finally, the model_name.

Parameters:

Name Type Description Default
instructions Union[str, List[str]]

The instructions to be included within the output.

required
answers Optional[List[str]]

The answers to be included within the output if generate_answers=True.

None

Returns:

Type Description
Dict[str, Any]

If store_evolutions=False and generate_answers=True return {"evolved_instruction": ..., "model_name": ..., "answer": ...}; if store_evolutions=True and generate_answers=True return {"evolved_instructions": ..., "model_name": ..., "answer": ...}; if store_evolutions=False and generate_answers=False return {"evolved_instruction": ..., "model_name": ...}; if store_evolutions=True and generate_answers=False return {"evolved_instructions": ..., "model_name": ...}.

Source code in src/distilabel/steps/tasks/evol_instruct/base.py
@override
def format_output(  # type: ignore
    self, instructions: Union[str, List[str]], answers: Optional[List[str]] = None
) -> Dict[str, Any]:  # type: ignore
    """The output for the task is a dict with: `evolved_instruction` or `evolved_instructions`,
    depending whether the value is either `False` or `True` for `store_evolutions`, respectively;
    `answer` if `generate_answers=True`; and, finally, the `model_name`.

    Args:
        instructions: The instructions to be included within the output.
        answers: The answers to be included within the output if `generate_answers=True`.

    Returns:
        If `store_evolutions=False` and `generate_answers=True` return {"evolved_instruction": ..., "model_name": ..., "answer": ...};
        if `store_evolutions=True` and `generate_answers=True` return {"evolved_instructions": ..., "model_name": ..., "answer": ...};
        if `store_evolutions=False` and `generate_answers=False` return {"evolved_instruction": ..., "model_name": ...};
        if `store_evolutions=True` and `generate_answers=False` return {"evolved_instructions": ..., "model_name": ...}.
    """
    _output = {}
    if not self.store_evolutions:
        _output["evolved_instruction"] = instructions[-1]
    else:
        _output["evolved_instructions"] = instructions

    if self.generate_answers and answers:
        if not self.store_evolutions:
            _output["answer"] = answers[-1]
        else:
            _output["answers"] = answers

    _output["model_name"] = self.llm.model_name
    return _output
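
For illustration, a sketch of the two most complete output shapes produced by the method above; all values are placeholders, and the keys follow the `format_output` implementation shown (note that with `store_evolutions=True` the implementation uses the `answers` key):

```python
# store_evolutions=False and generate_answers=True: last evolution plus its answer.
output_single = {
    "evolved_instruction": "last evolved instruction",
    "answer": "answer for the last evolution",
    "model_name": "placeholder-model",
}

# store_evolutions=True and generate_answers=True: every evolution plus its answers.
output_all = {
    "evolved_instructions": ["evolution 1", "evolution 2"],
    "answers": ["answer 1", "answer 2"],
    "model_name": "placeholder-model",
}
```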

process(inputs)

Processes the inputs of the task and generates the outputs using the LLM.

Parameters:

Name Type Description Default
inputs StepInput

A list of Python dictionaries with the inputs of the task.

required

Yields:

Type Description
StepOutput

A list of Python dictionaries with the outputs of the task.

Source code in src/distilabel/steps/tasks/evol_instruct/base.py
@override
def process(self, inputs: StepInput) -> "StepOutput":  # type: ignore
    """Processes the inputs of the task and generates the outputs using the LLM.

    Args:
        inputs: A list of Python dictionaries with the inputs of the task.

    Yields:
        A list of Python dictionaries with the outputs of the task.
    """

    evolved_instructions = self._evolve_instructions(inputs)

    if self.store_evolutions:
        # Remove the input instruction from the `evolved_instructions` list
        from_ = 1 if not self.include_original_instruction else 0
        evolved_instructions = [
            instruction[from_:] for instruction in evolved_instructions
        ]

    if not self.generate_answers:
        for input, instruction in zip(inputs, evolved_instructions):
            input.update(self.format_output(instruction))
        yield inputs

    self._logger.info(
        f"🎉 Finished evolving {len(evolved_instructions)} instructions!"
    )

    if self.generate_answers:
        self._logger.info(
            f"🧠 Generating answers for the {len(evolved_instructions)} evolved instructions!"
        )

        answers = self._generate_answers(evolved_instructions)

        self._logger.info(
            f"🎉 Finished generating answers for the {len(evolved_instructions)} evolved"
            " instructions!"
        )

        for idx, (input, instruction) in enumerate(
            zip(inputs, evolved_instructions)
        ):
            input.update(self.format_output(instruction, answers[idx]))
        yield inputs

EvolInstructGenerator

Bases: GeneratorTask

Generate evolved instructions using an LLM.

WizardLM: Empowering Large Language Models to Follow Complex Instructions

Attributes:

Name Type Description
num_instructions int

The number of instructions to be generated.

generate_answers bool

Whether to generate answers for the instructions or not. Defaults to False.

mutation_templates Dict[str, str]

The mutation templates to be used for the generation of the instructions.

min_length RuntimeParameter[int]

Defines the length (in bytes) that the generated instruction needs to be higher than, to be considered valid. Defaults to 512.

max_length RuntimeParameter[int]

Defines the length (in bytes) that the generated instruction needs to be lower than, to be considered valid. Defaults to 1024.

seed RuntimeParameter[int]

The seed to be set for numpy in order to randomly pick a mutation method. Defaults to 42.

Runtime parameters
  • min_length: Defines the length (in bytes) that the generated instruction needs to be higher than, to be considered valid.
  • max_length: Defines the length (in bytes) that the generated instruction needs to be lower than, to be considered valid.
  • seed: The seed to be set for numpy in order to randomly pick a mutation method.
Output columns
  • instruction (str): The generated instruction if generate_answers=False.
  • answer (str): The generated answer if generate_answers=True.
  • instructions (List[str]): The generated instructions if generate_answers=True.
  • model_name (str): The name of the LLM used to generate and evolve the instructions.
Categories
  • evol
  • instruction
  • generation
References
  • WizardLM: Empowering Large Language Models to Follow Complex Instructions (https://arxiv.org/abs/2304.12244)
  • GitHub: h2oai/h2o-wizardlm (https://github.com/h2oai/h2o-wizardlm)

Examples:

Generate evolved instructions without initial instructions:

```python
from distilabel.steps.tasks import EvolInstructGenerator
from distilabel.llms.huggingface import InferenceEndpointsLLM

# Consider this as a placeholder for your actual LLM.
evol_instruct_generator = EvolInstructGenerator(
    llm=InferenceEndpointsLLM(
        model_id="mistralai/Mistral-7B-Instruct-v0.2",
    ),
    num_instructions=2,
)

evol_instruct_generator.load()

result = next(evol_instruct_generator.process())
# result
# [{'instruction': 'generated instruction', 'model_name': 'test'}]
```
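
A second, hedged sketch with `generate_answers=True`, so that each generated instruction is paired with an answer; the printed result is illustrative only and depends on the LLM:

```python
from distilabel.steps.tasks import EvolInstructGenerator
from distilabel.llms.huggingface import InferenceEndpointsLLM

# Consider this as a placeholder for your actual LLM.
evol_instruct_generator = EvolInstructGenerator(
    llm=InferenceEndpointsLLM(
        model_id="mistralai/Mistral-7B-Instruct-v0.2",
    ),
    num_instructions=2,
    generate_answers=True,
)

evol_instruct_generator.load()

result = next(evol_instruct_generator.process())
# result (illustrative): a list of dictionaries plus the last-batch flag
# (
#     [
#         {'instruction': 'generated instruction', 'answer': 'generated answer', 'model_name': 'mistralai/Mistral-7B-Instruct-v0.2'},
#         ...
#     ],
#     True,
# )
```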

Citations:

```
@misc{xu2023wizardlmempoweringlargelanguage,
    title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
    author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
    year={2023},
    eprint={2304.12244},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2304.12244},
}
```
Source code in src/distilabel/steps/tasks/evol_instruct/generator.py
class EvolInstructGenerator(GeneratorTask):
    """Generate evolved instructions using an `LLM`.

    WizardLM: Empowering Large Language Models to Follow Complex Instructions

    Attributes:
        num_instructions: The number of instructions to be generated.
        generate_answers: Whether to generate answers for the instructions or not. Defaults
            to `False`.
        mutation_templates: The mutation templates to be used for the generation of the
            instructions.
        min_length: Defines the length (in bytes) that the generated instruction needs to
            be higher than, to be considered valid. Defaults to `512`.
        max_length: Defines the length (in bytes) that the generated instruction needs to
            be lower than, to be considered valid. Defaults to `1024`.
        seed: The seed to be set for `numpy` in order to randomly pick a mutation method.
            Defaults to `42`.

    Runtime parameters:
        - `min_length`: Defines the length (in bytes) that the generated instruction needs
            to be higher than, to be considered valid.
        - `max_length`: Defines the length (in bytes) that the generated instruction needs
            to be lower than, to be considered valid.
        - `seed`: The seed to be set for `numpy` in order to randomly pick a mutation method.

    Output columns:
        - instruction (`str`): The generated instruction if `generate_answers=False`.
        - answer (`str`): The generated answer if `generate_answers=True`.
        - instructions (`List[str]`): The generated instructions if `generate_answers=True`.
        - model_name (`str`): The name of the LLM used to generate and evolve the instructions.

    Categories:
        - evol
        - instruction
        - generation

    References:
        - [WizardLM: Empowering Large Language Models to Follow Complex Instructions](https://arxiv.org/abs/2304.12244)
        - [GitHub: h2oai/h2o-wizardlm](https://github.com/h2oai/h2o-wizardlm)

    Examples:

        Generate evolved instructions without initial instructions:

        ```python
        from distilabel.steps.tasks import EvolInstructGenerator
        from distilabel.llms.huggingface import InferenceEndpointsLLM

        # Consider this as a placeholder for your actual LLM.
        evol_instruct_generator = EvolInstructGenerator(
            llm=InferenceEndpointsLLM(
                model_id="mistralai/Mistral-7B-Instruct-v0.2",
            ),
            num_instructions=2,
        )

        evol_instruct_generator.load()

        result = next(evol_instruct_generator.process())
        # result
        # [{'instruction': 'generated instruction', 'model_name': 'test'}]
        ```

    Citations:

        ```
        @misc{xu2023wizardlmempoweringlargelanguage,
            title={WizardLM: Empowering Large Language Models to Follow Complex Instructions},
            author={Can Xu and Qingfeng Sun and Kai Zheng and Xiubo Geng and Pu Zhao and Jiazhan Feng and Chongyang Tao and Daxin Jiang},
            year={2023},
            eprint={2304.12244},
            archivePrefix={arXiv},
            primaryClass={cs.CL},
            url={https://arxiv.org/abs/2304.12244},
        }
        ```
    """

    num_instructions: int
    generate_answers: bool = False
    mutation_templates: Dict[str, str] = GENERATION_MUTATION_TEMPLATES

    min_length: RuntimeParameter[int] = Field(
        default=512,
        description="Defines the length (in bytes) that the generated instruction needs to be higher than, to be considered valid.",
    )
    max_length: RuntimeParameter[int] = Field(
        default=1024,
        description="Defines the length (in bytes) that the generated instruction needs to be lower than, to be considered valid.",
    )

    seed: RuntimeParameter[int] = Field(
        default=42,
        description="As `numpy` is being used in order to randomly pick a mutation method, then is nice to seed a random seed.",
    )
    _seed_texts: Optional[List[str]] = PrivateAttr(default_factory=list)
    _prompts: Optional[List[str]] = PrivateAttr(default_factory=list)

    def _generate_seed_texts(self) -> List[str]:
        """Generates a list of seed texts to be used as part of the starting prompts for the task.

        It will use the `FRESH_START` mutation template, as it needs to generate text from scratch; and
        a list of English words will be used to generate the seed texts that will be provided to the
        mutation method and included within the prompt.

        Returns:
            A list of seed texts to be used as part of the starting prompts for the task.
        """
        seed_texts = []
        for _ in range(self.num_instructions * 10):
            num_words = np.random.choice([1, 2, 3, 4])
            seed_texts.append(
                self.mutation_templates["FRESH_START"].replace(  # type: ignore
                    "<PROMPT>",
                    ", ".join(
                        [
                            np.random.choice(self._english_nouns).strip()
                            for _ in range(num_words)
                        ]
                    ),
                )
            )
        return seed_texts

    @override
    def model_post_init(self, __context: Any) -> None:
        """Override this method to perform additional initialization after `__init__` and `model_construct`.
        This is useful if you want to do some validation that requires the entire model to be initialized.
        """
        super().model_post_init(__context)

        np.random.seed(self.seed)

        self._seed_texts = self._generate_seed_texts()
        self._prompts = [
            np.random.choice(self._seed_texts) for _ in range(self.num_instructions)
        ]

    @cached_property
    def _english_nouns(self) -> List[str]:
        """A list of English nouns to be used as part of the starting prompts for the task.

        References:
            - https://github.com/h2oai/h2o-wizardlm
        """
        _path = str(
            importlib_resources.files("distilabel")
            / "steps/tasks/evol_instruct/english_nouns.txt"
        )
        with open(_path, mode="r") as f:
            return [line.strip() for line in f.readlines()]

    @property
    def outputs(self) -> List[str]:
        """The output for the task are the `instruction`, the `answer` if `generate_answers=True`
        and the `model_name`."""
        _outputs = ["instruction", "model_name"]
        if self.generate_answers:
            _outputs.append("answer")
        return _outputs

    def format_output(  # type: ignore
        self, instruction: str, answer: Optional[str] = None
    ) -> Dict[str, Any]:
        """The output for the task is a dict with: `instruction`; `answer` if `generate_answers=True`;
        and, finally, the `model_name`.

        Args:
            instruction: The instruction to be included within the output.
            answer: The answer to be included within the output if `generate_answers=True`.

        Returns:
            If `generate_answers=True` return {"instruction": ..., "answer": ..., "model_name": ...};
            if `generate_answers=False` return {"instruction": ..., "model_name": ...};
        """
        _output = {
            "instruction": instruction,
            "model_name": self.llm.model_name,
        }
        if self.generate_answers and answer is not None:
            _output["answer"] = answer
        return _output

    @property
    def mutation_templates_names(self) -> List[str]:
        """Returns the names i.e. keys of the provided `mutation_templates`."""
        return list(self.mutation_templates.keys())

    def _apply_random_mutation(self, iter_no: int) -> List["ChatType"]:
        """Applies a random mutation from the ones provided as part of the `mutation_templates`
        enum, and returns the provided instruction within the mutation prompt.

        Args:
            iter_no: The iteration number to be used to check whether the iteration is the
                first one i.e. FRESH_START, or not.

        Returns:
            A random mutation prompt with the provided instruction formatted as an OpenAI conversation.
        """
        prompts = []
        for idx in range(self.num_instructions):
            if (
                iter_no == 0
                or "Write one question or request containing" in self._prompts[idx]  # type: ignore
            ):
                mutation = "FRESH_START"
            else:
                mutation = np.random.choice(self.mutation_templates_names)
                if mutation == "FRESH_START":
                    self._prompts[idx] = np.random.choice(self._seed_texts)  # type: ignore

            prompt_with_template = (
                self.mutation_templates[mutation].replace(  # type: ignore
                    "<PROMPT>",
                    self._prompts[idx],  # type: ignore
                )  # type: ignore
                if iter_no != 0
                else self._prompts[idx]  # type: ignore
            )
            prompts.append([{"role": "user", "content": prompt_with_template}])
        return prompts

    def _generate_answers(self, instructions: List[List[str]]) -> List[str]:
        """Generates the answer for the last instruction in `instructions`.

        Args:
            instructions: A list of lists where each item is a list with either the last
                evolved instruction if `store_evolutions=False` or all the evolved instructions
                if `store_evolutions=True`.

        Returns:
            A list of answers for the last instruction in `instructions`.
        """
        # TODO: update to generate answers for all the instructions
        _formatted_instructions = [
            [{"role": "user", "content": instruction[-1]}]
            for instruction in instructions
        ]
        responses = self.llm.generate(
            _formatted_instructions,
            **self.llm.generation_kwargs,  # type: ignore
        )
        return flatten_responses(responses)

    @override
    def process(self, offset: int = 0) -> "GeneratorStepOutput":  # type: ignore
        """Processes the inputs of the task and generates the outputs using the LLM.

        Args:
            offset: The offset to start the generation from. Defaults to 0.

        Yields:
            A list of Python dictionaries with the outputs of the task, and a boolean
            flag indicating whether the task has finished or not i.e. is the last batch.
        """
        instructions = []
        mutation_no = 0

        # TODO: update to take into account `offset`
        iter_no = 0
        while len(instructions) < self.num_instructions:
            prompts = self._apply_random_mutation(iter_no=iter_no)

            generated_prompts = flatten_responses(
                self.llm.generate(prompts, **self.llm.generation_kwargs)  # type: ignore
            )
            for idx, generated_prompt in enumerate(generated_prompts):
                generated_prompt = generated_prompt.split("Prompt#:")[-1].strip()
                if self.max_length >= len(generated_prompt) >= self.min_length:  # type: ignore
                    instructions.append(generated_prompt)
                    self._prompts[idx] = np.random.choice(self._seed_texts)  # type: ignore
                else:
                    self._prompts[idx] = generated_prompt  # type: ignore

            self._logger.info(
                f"🔄 Ran iteration {iter_no} with {len(instructions)} instructions already evolved!"
            )
            iter_no += 1

            if len(instructions) > self.num_instructions:
                instructions = instructions[: self.num_instructions]
            if len(instructions) > mutation_no:
                mutation_no = len(instructions) - mutation_no

            if not self.generate_answers and len(instructions[-mutation_no:]) > 0:
                yield (
                    [
                        self.format_output(mutated_instruction)
                        for mutated_instruction in instructions[-mutation_no:]
                    ],
                    len(instructions) >= self.num_instructions,
                )

        self._logger.info(f"🎉 Finished evolving {len(instructions)} instructions!")

        if self.generate_answers:
            self._logger.info(
                f"🧠 Generating answers for the {len(instructions)} evolved instructions!"
            )

            answers = self._generate_answers(instructions)

            self._logger.info(
                f"🎉 Finished generating answers for the {len(instructions)} evolved instructions!"
            )

            yield (
                [
                    self.format_output(instruction, answer)
                    for instruction, answer in zip(instructions, answers)
                ],
                True,
            )

_english_nouns: List[str] cached property

A list of English nouns to be used as part of the starting prompts for the task.

References
  • https://github.com/h2oai/h2o-wizardlm

mutation_templates_names: List[str] property

Returns the names i.e. keys of the provided mutation_templates.

outputs: List[str] property

The output for the task are the instruction, the answer if generate_answers=True and the model_name.

_apply_random_mutation(iter_no)

Applies a random mutation from the ones provided as part of the mutation_templates enum, and returns the provided instruction within the mutation prompt.

Parameters:

Name Type Description Default
iter_no int

The iteration number to be used to check whether the iteration is the first one i.e. FRESH_START, or not.

required

Returns:

Type Description
List[ChatType]

A random mutation prompt with the provided instruction formatted as an OpenAI conversation.

Source code in src/distilabel/steps/tasks/evol_instruct/generator.py
def _apply_random_mutation(self, iter_no: int) -> List["ChatType"]:
    """Applies a random mutation from the ones provided as part of the `mutation_templates`
    enum, and returns the provided instruction within the mutation prompt.

    Args:
        iter_no: The iteration number to be used to check whether the iteration is the
            first one i.e. FRESH_START, or not.

    Returns:
        A random mutation prompt with the provided instruction formatted as an OpenAI conversation.
    """
    prompts = []
    for idx in range(self.num_instructions):
        if (
            iter_no == 0
            or "Write one question or request containing" in self._prompts[idx]  # type: ignore
        ):
            mutation = "FRESH_START"
        else:
            mutation = np.random.choice(self.mutation_templates_names)
            if mutation == "FRESH_START":
                self._prompts[idx] = np.random.choice(self._seed_texts)  # type: ignore

        prompt_with_template = (
            self.mutation_templates[mutation].replace(  # type: ignore
                "<PROMPT>",
                self._prompts[idx],  # type: ignore
            )  # type: ignore
            if iter_no != 0
            else self._prompts[idx]  # type: ignore
        )
        prompts.append([{"role": "user", "content": prompt_with_template}])
    return prompts

_generate_answers(instructions)

Generates the answer for the last instruction in instructions.

Parameters:

Name Type Description Default
instructions List[List[str]]

A list of lists where each item is a list with either the last evolved instruction if store_evolutions=False or all the evolved instructions if store_evolutions=True.

required

Returns:

Type Description
List[str]

A list of answers for the last instruction in instructions.

Source code in src/distilabel/steps/tasks/evol_instruct/generator.py
def _generate_answers(self, instructions: List[List[str]]) -> List[str]:
    """Generates the answer for the last instruction in `instructions`.

    Args:
        instructions: A list of lists where each item is a list with either the last
            evolved instruction if `store_evolutions=False` or all the evolved instructions
            if `store_evolutions=True`.

    Returns:
        A list of answers for the last instruction in `instructions`.
    """
    # TODO: update to generate answers for all the instructions
    _formatted_instructions = [
        [{"role": "user", "content": instruction[-1]}]
        for instruction in instructions
    ]
    responses = self.llm.generate(
        _formatted_instructions,
        **self.llm.generation_kwargs,  # type: ignore
    )
    return flatten_responses(responses)

_generate_seed_texts()

Generates a list of seed texts to be used as part of the starting prompts for the task.

It will use the FRESH_START mutation template, as it needs to generate text from scratch; and a list of English words will be used to generate the seed texts that will be provided to the mutation method and included within the prompt.

Returns:

Type Description
List[str]

A list of seed texts to be used as part of the starting prompts for the task.

Source code in src/distilabel/steps/tasks/evol_instruct/generator.py
def _generate_seed_texts(self) -> List[str]:
    """Generates a list of seed texts to be used as part of the starting prompts for the task.

    It will use the `FRESH_START` mutation template, as it needs to generate text from scratch; and
    a list of English words will be used to generate the seed texts that will be provided to the
    mutation method and included within the prompt.

    Returns:
        A list of seed texts to be used as part of the starting prompts for the task.
    """
    seed_texts = []
    for _ in range(self.num_instructions * 10):
        num_words = np.random.choice([1, 2, 3, 4])
        seed_texts.append(
            self.mutation_templates["FRESH_START"].replace(  # type: ignore
                "<PROMPT>",
                ", ".join(
                    [
                        np.random.choice(self._english_nouns).strip()
                        for _ in range(num_words)
                    ]
                ),
            )
        )
    return seed_texts

format_output(instruction, answer=None)

The output for the task is a dict with: instruction; answer if generate_answers=True; and, finally, the model_name.

Parameters:

Name Type Description Default
instruction str

The instruction to be included within the output.

required
answer Optional[str]

The answer to be included within the output if generate_answers=True.

None

Returns:

Type Description
Dict[str, Any]

If generate_answers=True return {"instruction": ..., "answer": ..., "model_name": ...}; if generate_answers=False return {"instruction": ..., "model_name": ...}.

Source code in src/distilabel/steps/tasks/evol_instruct/generator.py
def format_output(  # type: ignore
    self, instruction: str, answer: Optional[str] = None
) -> Dict[str, Any]:
    """The output for the task is a dict with: `instruction`; `answer` if `generate_answers=True`;
    and, finally, the `model_name`.

    Args:
        instruction: The instruction to be included within the output.
        answer: The answer to be included within the output if `generate_answers=True`.

    Returns:
        If `generate_answers=True` return {"instruction": ..., "answer": ..., "model_name": ...};
        if `generate_answers=False` return {"instruction": ..., "model_name": ...};
    """
    _output = {
        "instruction": instruction,
        "model_name": self.llm.model_name,
    }
    if self.generate_answers and answer is not None:
        _output["answer"] = answer
    return _output

model_post_init(__context)

Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.

Source code in src/distilabel/steps/tasks/evol_instruct/generator.py
@override
def model_post_init(self, __context: Any) -> None:
    """Override this method to perform additional initialization after `__init__` and `model_construct`.
    This is useful if you want to do some validation that requires the entire model to be initialized.
    """
    super().model_post_init(__context)

    np.random.seed(self.seed)

    self._seed_texts = self._generate_seed_texts()
    self._prompts = [
        np.random.choice(self._seed_texts) for _ in range(self.num_instructions)
    ]

process(offset=0)

Processes the inputs of the task and generates the outputs using the LLM.

Parameters:

Name Type Description Default
offset int

The offset to start the generation from. Defaults to 0.

0

Yields:

Type Description
GeneratorStepOutput

A list of Python dictionaries with the outputs of the task, and a boolean flag indicating whether the task has finished or not i.e. is the last batch.

Source code in src/distilabel/steps/tasks/evol_instruct/generator.py
@override
def process(self, offset: int = 0) -> "GeneratorStepOutput":  # type: ignore
    """Processes the inputs of the task and generates the outputs using the LLM.

    Args:
        offset: The offset to start the generation from. Defaults to 0.

    Yields:
        A list of Python dictionaries with the outputs of the task, and a boolean
        flag indicating whether the task has finished or not i.e. is the last batch.
    """
    instructions = []
    mutation_no = 0

    # TODO: update to take into account `offset`
    iter_no = 0
    while len(instructions) < self.num_instructions:
        prompts = self._apply_random_mutation(iter_no=iter_no)

        generated_prompts = flatten_responses(
            self.llm.generate(prompts, **self.llm.generation_kwargs)  # type: ignore
        )
        for idx, generated_prompt in enumerate(generated_prompts):
            generated_prompt = generated_prompt.split("Prompt#:")[-1].strip()
            if self.max_length >= len(generated_prompt) >= self.min_length:  # type: ignore
                instructions.append(generated_prompt)
                self._prompts[idx] = np.random.choice(self._seed_texts)  # type: ignore
            else:
                self._prompts[idx] = generated_prompt  # type: ignore

        self._logger.info(
            f"🔄 Ran iteration {iter_no} with {len(instructions)} instructions already evolved!"
        )
        iter_no += 1

        if len(instructions) > self.num_instructions:
            instructions = instructions[: self.num_instructions]
        if len(instructions) > mutation_no:
            mutation_no = len(instructions) - mutation_no

        if not self.generate_answers and len(instructions[-mutation_no:]) > 0:
            yield (
                [
                    self.format_output(mutated_instruction)
                    for mutated_instruction in instructions[-mutation_no:]
                ],
                len(instructions) >= self.num_instructions,
            )

    self._logger.info(f"🎉 Finished evolving {len(instructions)} instructions!")

    if self.generate_answers:
        self._logger.info(
            f"🧠 Generating answers for the {len(instructions)} evolved instructions!"
        )

        answers = self._generate_answers(instructions)

        self._logger.info(
            f"🎉 Finished generating answers for the {len(instructions)} evolved instructions!"
        )

        yield (
            [
                self.format_output(instruction, answer)
                for instruction, answer in zip(instructions, answers)
            ],
            True,
        )

EvolQuality

Bases: Task

Evolve the quality of the responses using an LLM.

EvolQuality task is used to evolve the quality of the responses given a prompt, by generating a new response with a language model. This step implements the evolution quality task from the paper 'What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning'.

Attributes:

Name Type Description
num_evolutions int

The number of evolutions to be performed on the responses.

store_evolutions bool

Whether to store all the evolved responses or just the last one. Defaults to False.

include_original_response bool

Whether to include the original response within the evolved responses. Defaults to False.

mutation_templates Dict[str, str]

The mutation templates to be used to evolve the responses.

seed RuntimeParameter[int]

The seed to be set for numpy in order to randomly pick a mutation method. Defaults to 42.

Runtime parameters
  • seed: The seed to be set for numpy in order to randomly pick a mutation method.
Input columns
  • instruction (str): The instruction that was used to generate the responses.
  • response (str): The responses to be rewritten.
Output columns
  • evolved_response (str): The evolved response if store_evolutions=False.
  • evolved_responses (List[str]): The evolved responses if store_evolutions=True.
  • model_name (str): The name of the LLM used to evolve the responses.
Categories
  • evol
  • response
  • deita
References
  • What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning (https://arxiv.org/abs/2312.15685)

Examples:

Evolve the quality of the responses given a prompt:

```python
from distilabel.steps.tasks import EvolQuality
from distilabel.llms.huggingface import InferenceEndpointsLLM

# Consider this as a placeholder for your actual LLM.
evol_quality = EvolQuality(
    llm=InferenceEndpointsLLM(
        model_id="mistralai/Mistral-7B-Instruct-v0.2",
    ),
    num_evolutions=2,
)

evol_quality.load()

result = next(
    evol_quality.process(
        [
            {"instruction": "common instruction", "response": "a response"},
        ]
    )
)
# result
# [
#     {
#         'instruction': 'common instruction',
#         'response': 'a response',
#         'evolved_response': 'evolved response',
#         'model_name': '"mistralai/Mistral-7B-Instruct-v0.2"'
#     }
# ]
```
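
A hedged variant with `store_evolutions=True`, which keeps every evolved response instead of only the last one; the output values are placeholders:

```python
from distilabel.steps.tasks import EvolQuality
from distilabel.llms.huggingface import InferenceEndpointsLLM

# Consider this as a placeholder for your actual LLM.
evol_quality = EvolQuality(
    llm=InferenceEndpointsLLM(
        model_id="mistralai/Mistral-7B-Instruct-v0.2",
    ),
    num_evolutions=2,
    store_evolutions=True,
)

evol_quality.load()

result = next(
    evol_quality.process(
        [
            {"instruction": "common instruction", "response": "a response"},
        ]
    )
)
# result (illustrative)
# [
#     {
#         'instruction': 'common instruction',
#         'response': 'a response',
#         'evolved_responses': ['first evolved response', 'second evolved response'],
#         'model_name': 'mistralai/Mistral-7B-Instruct-v0.2'
#     }
# ]
```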

Citations:

```
@misc{liu2024makesgooddataalignment,
    title={What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning},
    author={Wei Liu and Weihao Zeng and Keqing He and Yong Jiang and Junxian He},
    year={2024},
    eprint={2312.15685},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2312.15685},
}
```
Source code in src/distilabel/steps/tasks/evol_quality/base.py
class EvolQuality(Task):
    """Evolve the quality of the responses using an `LLM`.

    `EvolQuality` task is used to evolve the quality of the responses given a prompt,
    by generating a new response with a language model. This step implements the evolution
    quality task from the paper 'What Makes Good Data for Alignment? A Comprehensive Study of
    Automatic Data Selection in Instruction Tuning'.

    Attributes:
        num_evolutions: The number of evolutions to be performed on the responses.
        store_evolutions: Whether to store all the evolved responses or just the last one.
            Defaults to `False`.
        include_original_response: Whether to include the original response within the evolved
            responses. Defaults to `False`.
        mutation_templates: The mutation templates to be used to evolve the responses.
        seed: The seed to be set for `numpy` in order to randomly pick a mutation method.
            Defaults to `42`.

    Runtime parameters:
        - `seed`: The seed to be set for `numpy` in order to randomly pick a mutation method.

    Input columns:
        - instruction (`str`): The instruction that was used to generate the `responses`.
        - response (`str`): The responses to be rewritten.

    Output columns:
        - evolved_response (`str`): The evolved response if `store_evolutions=False`.
        - evolved_responses (`List[str]`): The evolved responses if `store_evolutions=True`.
        - model_name (`str`): The name of the LLM used to evolve the responses.

    Categories:
        - evol
        - response
        - deita

    References:
        - [`What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning`](https://arxiv.org/abs/2312.15685)

    Examples:

        Evolve the quality of the responses given a prompt:

        ```python
        from distilabel.steps.tasks import EvolQuality
        from distilabel.llms.huggingface import InferenceEndpointsLLM

        # Consider this as a placeholder for your actual LLM.
        evol_quality = EvolQuality(
            llm=InferenceEndpointsLLM(
                model_id="mistralai/Mistral-7B-Instruct-v0.2",
            ),
            num_evolutions=2,
        )

        evol_quality.load()

        result = next(
            evol_quality.process(
                [
                    {"instruction": "common instruction", "response": "a response"},
                ]
            )
        )
        # result
        # [
        #     {
        #         'instruction': 'common instruction',
        #         'response': 'a response',
        #         'evolved_response': 'evolved response',
        #         'model_name': '"mistralai/Mistral-7B-Instruct-v0.2"'
        #     }
        # ]
        ```

    Citations:

        ```
        @misc{liu2024makesgooddataalignment,
            title={What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning},
            author={Wei Liu and Weihao Zeng and Keqing He and Yong Jiang and Junxian He},
            year={2024},
            eprint={2312.15685},
            archivePrefix={arXiv},
            primaryClass={cs.CL},
            url={https://arxiv.org/abs/2312.15685},
        }
        ```
    """

    num_evolutions: int
    store_evolutions: bool = False
    include_original_response: bool = False
    mutation_templates: Dict[str, str] = MUTATION_TEMPLATES

    seed: RuntimeParameter[int] = Field(
        default=42,
        description="As `numpy` is being used in order to randomly pick a mutation method, then is nice to set a random seed.",
    )

    @override
    def model_post_init(self, __context: Any) -> None:
        """Override this method to perform additional initialization after `__init__` and `model_construct`.
        This is useful if you want to do some validation that requires the entire model to be initialized.
        """
        super().model_post_init(__context)

    @property
    def inputs(self) -> List[str]:
        """The input for the task are the `instruction` and `response`."""
        return ["instruction", "response"]

    def format_input(self, input: str) -> ChatType:  # type: ignore
        """The input is formatted as a `ChatType` assuming that the instruction
        is the first interaction from the user within a conversation. And the
        `system_prompt` is added as the first message if it exists."""
        return [{"role": "user", "content": input}]

    @property
    def outputs(self) -> List[str]:
        """The output for the task are the `evolved_response/s` and the `model_name`."""
        # TODO: having to define a `model_name` column every time as the `Task.outputs` is not ideal,
        # this could be handled always and the value could be included within the DAG validation when
        # a `Task` is used, since all the `Task` subclasses will have an `llm` with a `model_name` attr.
        _outputs = [
            ("evolved_response" if not self.store_evolutions else "evolved_responses"),
            "model_name",
        ]

        return _outputs

    def format_output(self, responses: Union[str, List[str]]) -> Dict[str, Any]:  # type: ignore
        """The output for the task is a dict with: `evolved_response` or `evolved_responses`,
        depending whether the value is either `False` or `True` for `store_evolutions`, respectively;
        and, finally, the `model_name`.

        Args:
            responses: The responses to be included within the output.

        Returns:
            if `store_evolutions=False` return {"evolved_response": ..., "model_name": ...};
            if `store_evolutions=True` return {"evolved_responses": ..., "model_name": ...}.
        """
        _output = {}

        if not self.store_evolutions:
            _output["evolved_response"] = responses[-1]
        else:
            _output["evolved_responses"] = responses

        _output["model_name"] = self.llm.model_name
        return _output

    @property
    def mutation_templates_names(self) -> List[str]:
        """Returns the names i.e. keys of the provided `mutation_templates` enum."""
        return list(self.mutation_templates.keys())

    def _apply_random_mutation(self, instruction: str, response: str) -> str:
        """Applies a random mutation from the ones provided as part of the `mutation_templates`
        enum, and returns the provided instruction within the mutation prompt.

        Args:
            instruction: The instruction to be included within the mutation prompt.

        Returns:
            A random mutation prompt with the provided instruction.
        """
        mutation = np.random.choice(self.mutation_templates_names)
        return (
            self.mutation_templates[mutation]
            .replace("<PROMPT>", instruction)
            .replace("<RESPONSE>", response)
        )

    def _evolve_reponses(self, inputs: "StepInput") -> List[List[str]]:
        """Evolves the instructions provided as part of the inputs of the task.

        Args:
            inputs: A list of Python dictionaries with the inputs of the task.

        Returns:
            A list where each item is a list with either the last evolved instruction if
            `store_evolutions=False` or all the evolved instructions if `store_evolutions=True`.
        """
        np.random.seed(self.seed)
        instructions: List[List[str]] = [[input["instruction"]] for input in inputs]
        responses: List[List[str]] = [[input["response"]] for input in inputs]

        for iter_no in range(self.num_evolutions):
            formatted_prompts = []
            for instruction, response in zip(instructions, responses):
                formatted_prompts.append(
                    self._apply_random_mutation(instruction[-1], response[-1])
                )

            formatted_prompts = [
                self.format_input(prompt) for prompt in formatted_prompts
            ]

            generated_responses = self.llm.generate(
                formatted_prompts,
                **self.llm.generation_kwargs,  # type: ignore
            )

            if self.store_evolutions:
                responses = [
                    response + [evolved_response[0]]
                    for response, evolved_response in zip(
                        responses, generated_responses
                    )
                ]
            else:
                responses = [
                    [evolved_response[0]] for evolved_response in generated_responses
                ]

            self._logger.info(
                f"🔄 Ran iteration {iter_no} evolving {len(responses)} responses!"
            )

        return responses

    @override
    def process(self, inputs: StepInput) -> "StepOutput":  # type: ignore
        """Processes the inputs of the task and generates the outputs using the LLM.

        Args:
            inputs: A list of Python dictionaries with the inputs of the task.

        Returns:
            A list of Python dictionaries with the outputs of the task.
        """

        responses = self._evolve_reponses(inputs)

        if self.store_evolutions:
            # Remove the input instruction from the `evolved_responses` list
            from_ = 1 if not self.include_original_response else 0
            responses = [response[from_:] for response in responses]

        for input, response in zip(inputs, responses):
            input.update(self.format_output(response))
        yield inputs

        self._logger.info(f"🎉 Finished evolving {len(responses)} instructions!")

inputs: List[str] property

The input for the task are the instruction and response.

mutation_templates_names: List[str] property

Returns the names i.e. keys of the provided mutation_templates enum.

outputs: List[str] property

The output for the task are the evolved_response/s and the model_name.

_apply_random_mutation(instruction, response)

Applies a random mutation from the ones provided as part of the mutation_templates enum, and returns the provided instruction within the mutation prompt.

Parameters:

Name Type Description Default
instruction str

The instruction to be included within the mutation prompt.

required

Returns:

Type Description
str

A random mutation prompt with the provided instruction.

Source code in src/distilabel/steps/tasks/evol_quality/base.py
def _apply_random_mutation(self, instruction: str, response: str) -> str:
    """Applies a random mutation from the ones provided as part of the `mutation_templates`
    enum, and returns the provided instruction within the mutation prompt.

    Args:
        instruction: The instruction to be included within the mutation prompt.

    Returns:
        A random mutation prompt with the provided instruction.
    """
    mutation = np.random.choice(self.mutation_templates_names)
    return (
        self.mutation_templates[mutation]
        .replace("<PROMPT>", instruction)
        .replace("<RESPONSE>", response)
    )

_evolve_reponses(inputs)

Evolves the instructions provided as part of the inputs of the task.

Parameters:

Name Type Description Default
inputs StepInput

A list of Python dictionaries with the inputs of the task.

required

Returns:

Type Description
List[List[str]]

A list where each item is a list with either the last evolved instruction if store_evolutions=False or all the evolved instructions if store_evolutions=True.

Source code in src/distilabel/steps/tasks/evol_quality/base.py
def _evolve_reponses(self, inputs: "StepInput") -> List[List[str]]:
    """Evolves the instructions provided as part of the inputs of the task.

    Args:
        inputs: A list of Python dictionaries with the inputs of the task.

    Returns:
        A list where each item is a list with either the last evolved instruction if
        `store_evolutions=False` or all the evolved instructions if `store_evolutions=True`.
    """
    np.random.seed(self.seed)
    instructions: List[List[str]] = [[input["instruction"]] for input in inputs]
    responses: List[List[str]] = [[input["response"]] for input in inputs]

    for iter_no in range(self.num_evolutions):
        formatted_prompts = []
        for instruction, response in zip(instructions, responses):
            formatted_prompts.append(
                self._apply_random_mutation(instruction[-1], response[-1])
            )

        formatted_prompts = [
            self.format_input(prompt) for prompt in formatted_prompts
        ]

        generated_responses = self.llm.generate(
            formatted_prompts,
            **self.llm.generation_kwargs,  # type: ignore
        )

        if self.store_evolutions:
            responses = [
                response + [evolved_response[0]]
                for response, evolved_response in zip(
                    responses, generated_responses
                )
            ]
        else:
            responses = [
                [evolved_response[0]] for evolved_response in generated_responses
            ]

        self._logger.info(
            f"🔄 Ran iteration {iter_no} evolving {len(responses)} responses!"
        )

    return responses

format_input(input)

The input is formatted as a ChatType assuming that the instruction is the first interaction from the user within a conversation. And the system_prompt is added as the first message if it exists.

Source code in src/distilabel/steps/tasks/evol_quality/base.py
def format_input(self, input: str) -> ChatType:  # type: ignore
    """The input is formatted as a `ChatType` assuming that the instruction
    is the first interaction from the user within a conversation. And the
    `system_prompt` is added as the first message if it exists."""
    return [{"role": "user", "content": input}]

format_output(responses)

The output for the task is a dict with: evolved_response or evolved_responses, depending whether the value is either False or True for store_evolutions, respectively; and, finally, the model_name.

Parameters:

Name Type Description Default
responses Union[str, List[str]]

The responses to be included within the output.

required

Returns:

Type Description
Dict[str, Any]

If store_evolutions=False return {"evolved_response": ..., "model_name": ...}; if store_evolutions=True return {"evolved_responses": ..., "model_name": ...}.

Source code in src/distilabel/steps/tasks/evol_quality/base.py
def format_output(self, responses: Union[str, List[str]]) -> Dict[str, Any]:  # type: ignore
    """The output for the task is a dict with: `evolved_response` or `evolved_responses`,
    depending whether the value is either `False` or `True` for `store_evolutions`, respectively;
    and, finally, the `model_name`.

    Args:
        responses: The responses to be included within the output.

    Returns:
        if `store_evolutions=False` return {"evolved_response": ..., "model_name": ...};
        if `store_evolutions=True` return {"evolved_responses": ..., "model_name": ...}.
    """
    _output = {}

    if not self.store_evolutions:
        _output["evolved_response"] = responses[-1]
    else:
        _output["evolved_responses"] = responses

    _output["model_name"] = self.llm.model_name
    return _output

model_post_init(__context)

Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.

Source code in src/distilabel/steps/tasks/evol_quality/base.py
@override
def model_post_init(self, __context: Any) -> None:
    """Override this method to perform additional initialization after `__init__` and `model_construct`.
    This is useful if you want to do some validation that requires the entire model to be initialized.
    """
    super().model_post_init(__context)

process(inputs)

Processes the inputs of the task and generates the outputs using the LLM.

Parameters:

Name Type Description Default
inputs StepInput

A list of Python dictionaries with the inputs of the task.

required

Returns:

Type Description
StepOutput

A list of Python dictionaries with the outputs of the task.

Source code in src/distilabel/steps/tasks/evol_quality/base.py
@override
def process(self, inputs: StepInput) -> "StepOutput":  # type: ignore
    """Processes the inputs of the task and generates the outputs using the LLM.

    Args:
        inputs: A list of Python dictionaries with the inputs of the task.

    Returns:
        A list of Python dictionaries with the outputs of the task.
    """

    responses = self._evolve_reponses(inputs)

    if self.store_evolutions:
        # Remove the input instruction from the `evolved_responses` list
        from_ = 1 if not self.include_original_response else 0
        responses = [response[from_:] for response in responses]

    for input, response in zip(inputs, responses):
        input.update(self.format_output(response))
    yield inputs

    self._logger.info(f"🎉 Finished evolving {len(responses)} instructions!")

GenerateEmbeddings

Bases: Step

Generate embeddings using the last hidden state of an LLM.

Generate embeddings for a text input using the last hidden state of an LLM, as described in the paper 'What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning'.

Attributes:

Name Type Description
llm LLM

The LLM to use to generate the embeddings.

Input columns
  • text (str, List[Dict[str, str]]): The input text or conversation to generate embeddings for.
Output columns
  • embedding (List[float]): The embedding of the input text or conversation.
  • model_name (str): The model name used to generate the embeddings.
Categories
  • embedding
  • llm
References
  • What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning (https://arxiv.org/abs/2312.15685)

Examples:

Generate embeddings for a text input:

```python
from distilabel.steps.tasks import GenerateEmbeddings
from distilabel.llms.huggingface import TransformersLLM

# Consider this as a placeholder for your actual LLM.
embedder = GenerateEmbeddings(
    llm=TransformersLLM(
        model="TaylorAI/bge-micro-v2",
        model_kwargs={"is_decoder": True},
        cuda_devices=[],
    )
)
embedder.load()

result = next(
    embedder.process(
        [
            {"text": "Hello, how are you?"},
        ]
    )
)
```
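
Since the `text` column may also hold a conversation in OpenAI chat-like format (see the input columns above), a hedged sketch of that usage; the setup mirrors the previous example:

```python
from distilabel.steps.tasks import GenerateEmbeddings
from distilabel.llms.huggingface import TransformersLLM

# Consider this as a placeholder for your actual LLM.
embedder = GenerateEmbeddings(
    llm=TransformersLLM(
        model="TaylorAI/bge-micro-v2",
        model_kwargs={"is_decoder": True},
        cuda_devices=[],
    )
)
embedder.load()

result = next(
    embedder.process(
        [
            {
                "text": [
                    {"role": "user", "content": "Hello, how are you?"},
                    {"role": "assistant", "content": "Doing great, thanks!"},
                ]
            },
        ]
    )
)
# Each output row gains an `embedding` (List[float]) and the `model_name` used.
```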

Citations:

```
@misc{liu2024makesgooddataalignment,
    title={What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning},
    author={Wei Liu and Weihao Zeng and Keqing He and Yong Jiang and Junxian He},
    year={2024},
    eprint={2312.15685},
    archivePrefix={arXiv},
    primaryClass={cs.CL},
    url={https://arxiv.org/abs/2312.15685},
}
```
Source code in src/distilabel/steps/tasks/generate_embeddings.py
class GenerateEmbeddings(Step):
    """Generate embeddings using the last hidden state of an `LLM`.

    Generate embeddings for a text input using the last hidden state of an `LLM`, as
    described in the paper 'What Makes Good Data for Alignment? A Comprehensive Study of
    Automatic Data Selection in Instruction Tuning'.

    Attributes:
        llm: The `LLM` to use to generate the embeddings.

    Input columns:
        - text (`str`, `List[Dict[str, str]]`): The input text or conversation to generate
            embeddings for.

    Output columns:
        - embedding (`List[float]`): The embedding of the input text or conversation.
        - model_name (`str`): The model name used to generate the embeddings.

    Categories:
        - embedding
        - llm

    References:
        - [What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning](https://arxiv.org/abs/2312.15685)

    Examples:

        Rank LLM candidates:

        ```python
        from distilabel.steps.tasks import GenerateEmbeddings
        from distilabel.llms.huggingface import TransformersLLM

        # Consider this as a placeholder for your actual LLM.
        embedder = GenerateEmbeddings(
            llm=TransformersLLM(
                model="TaylorAI/bge-micro-v2",
                model_kwargs={"is_decoder": True},
                cuda_devices=[],
            )
        )
        embedder.load()

        result = next(
            embedder.process(
                [
                    {"text": "Hello, how are you?"},
                ]
            )
        )
        ```

    Citations:

        ```
        @misc{liu2024makesgooddataalignment,
            title={What Makes Good Data for Alignment? A Comprehensive Study of Automatic Data Selection in Instruction Tuning},
            author={Wei Liu and Weihao Zeng and Keqing He and Yong Jiang and Junxian He},
            year={2024},
            eprint={2312.15685},
            archivePrefix={arXiv},
            primaryClass={cs.CL},
            url={https://arxiv.org/abs/2312.15685},
        }
        ```
    """

    llm: LLM

    def load(self) -> None:
        """Loads the `LLM` used to generate the embeddings."""
        super().load()

        self.llm.load()

    @property
    def inputs(self) -> List[str]:
        """The inputs for the task is a `text` column containing either a string or a
        list of dictionaries in OpenAI chat-like format."""
        return ["text"]

    @property
    def outputs(self) -> List[str]:
        """The outputs for the task is an `embedding` column containing the embedding of
        the `text` input."""
        return ["embedding", "model_name"]

    def format_input(self, input: Dict[str, Any]) -> "ChatType":
        """Formats the input to be used by the LLM to generate the embeddings. The input
        can be in `ChatType` format or a string. If a string, it will be converted to a
        list of dictionaries in OpenAI chat-like format.

        Args:
            input: The input to format.

        Returns:
            The OpenAI chat-like format of the input.
        """
        text = input["text"] = input["text"]

        # input is in `ChatType` format
        if isinstance(text, str):
            return [{"role": "user", "content": text}]

        if is_openai_format(text):
            return text

        raise ValueError(
            f"Couldn't format input for step {self.name}. The `text` input column has to"
            " be a string or a list of dictionaries in OpenAI chat-like format."
        )

    def process(self, inputs: StepInput) -> "StepOutput":  # type: ignore
        """Generates an embedding for each input using the last hidden state of the `LLM`.

        Args:
            inputs: A list of Python dictionaries with the inputs of the task.

        Yields:
            A list of Python dictionaries with the outputs of the task.
        """
        formatted_inputs = [self.format_input(input) for input in inputs]
        last_hidden_states = self.llm.get_last_hidden_states(formatted_inputs)
        for input, hidden_state in zip(inputs, last_hidden_states):
            input["embedding"] = hidden_state[-1].tolist()
            input["model_name"] = self.llm.model_name
        yield inputs

inputs: List[str] property

The input for the task is a text column containing either a string or a list of dictionaries in OpenAI chat-like format.

outputs: List[str] property

The outputs for the task are an embedding column containing the embedding of the text input and a model_name column with the name of the model used.

format_input(input)

Formats the input to be used by the LLM to generate the embeddings. The input can be in ChatType format or a string. If a string, it will be converted to a list of dictionaries in OpenAI chat-like format.

Parameters:

Name Type Description Default
input Dict[str, Any]

The input to format.

required

Returns:

Type Description
ChatType

The OpenAI chat-like format of the input.

Source code in src/distilabel/steps/tasks/generate_embeddings.py
def format_input(self, input: Dict[str, Any]) -> "ChatType":
    """Formats the input to be used by the LLM to generate the embeddings. The input
    can be in `ChatType` format or a string. If a string, it will be converted to a
    list of dictionaries in OpenAI chat-like format.

    Args:
        input: The input to format.

    Returns:
        The OpenAI chat-like format of the input.
    """
    text = input["text"] = input["text"]

    # input is in `ChatType` format
    if isinstance(text, str):
        return [{"role": "user", "content": text}]

    if is_openai_format(text):
        return text

    raise ValueError(
        f"Couldn't format input for step {self.name}. The `text` input column has to"
        " be a string or a list of dictionaries in OpenAI chat-like format."
    )
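
A minimal sketch of the two `text` formats accepted by `format_input` (assuming `embedder` is the `GenerateEmbeddings` instance from the example above):

```python
# A plain string is wrapped into a single-turn OpenAI chat-like message.
embedder.format_input({"text": "Hello, how are you?"})
# -> [{"role": "user", "content": "Hello, how are you?"}]

# A conversation already in OpenAI chat-like format is returned unchanged.
embedder.format_input(
    {"text": [{"role": "user", "content": "Hello"}, {"role": "assistant", "content": "Hi!"}]}
)
# -> the same list of role/content dictionaries
```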

load()

Loads the LLM used to generate the embeddings.

Source code in src/distilabel/steps/tasks/generate_embeddings.py
def load(self) -> None:
    """Loads the `LLM` used to generate the embeddings."""
    super().load()

    self.llm.load()

process(inputs)

Generates an embedding for each input using the last hidden state of the LLM.

Parameters:

Name Type Description Default
inputs StepInput

A list of Python dictionaries with the inputs of the task.

required

Yields:

Type Description
StepOutput

A list of Python dictionaries with the outputs of the task.

Source code in src/distilabel/steps/tasks/generate_embeddings.py
def process(self, inputs: StepInput) -> "StepOutput":  # type: ignore
    """Generates an embedding for each input using the last hidden state of the `LLM`.

    Args:
        inputs: A list of Python dictionaries with the inputs of the task.

    Yields:
        A list of Python dictionaries with the outputs of the task.
    """
    formatted_inputs = [self.format_input(input) for input in inputs]
    last_hidden_states = self.llm.get_last_hidden_states(formatted_inputs)
    for input, hidden_state in zip(inputs, last_hidden_states):
        input["embedding"] = hidden_state[-1].tolist()
        input["model_name"] = self.llm.model_name
    yield inputs
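
A minimal sketch of the rows yielded by `process` (the embedding values shown are illustrative, not real output):

```python
result = next(embedder.process([{"text": "Hello, how are you?"}]))
# result == [
#     {
#         "text": "Hello, how are you?",
#         "embedding": [-0.028, 0.113, ...],  # last hidden state of the final token
#         "model_name": "TaylorAI/bge-micro-v2",
#     }
# ]
```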

GenerateLongTextMatchingData

Bases: _EmbeddingDataGeneration

Generate long text matching data with an LLM to later on train an embedding model.

GenerateLongTextMatchingData is a Task that generates long text matching data with an LLM to later on train an embedding model. The task is based on the paper "Improving Text Embeddings with Large Language Models" and the data is generated based on the provided attributes, or randomly sampled if not provided.

Note

Ideally this task should be used with EmbeddingTaskGenerator with flatten_tasks=True and category="text-matching-long", so that the LLM generates a list of tasks that is flattened and each row contains a single task for the text-matching-long category.

Attributes:

Name Type Description
language str

The language of the data to be generated, which can be any of the languages retrieved from the list of XLM-R in the Appendix A of https://aclanthology.org/2020.acl-main.747.pdf.

seed str

The random seed to be set in case there's any sampling within the format_input method. Note that in this task the seed has no effect since there are no sampling params.

References
  • Improving Text Embeddings with Large Language Models: https://arxiv.org/abs/2401.00368

Examples:

Generate synthetic long text matching data for training embedding models:

```python
from distilabel.pipeline import Pipeline
from distilabel.steps.tasks import EmbeddingTaskGenerator, GenerateLongTextMatchingData

with Pipeline("my-pipeline") as pipeline:
    task = EmbeddingTaskGenerator(
        category="text-matching-long",
        flatten_tasks=True,
        llm=...,  # LLM instance
    )

    generate = GenerateLongTextMatchingData(
        language="English",
        llm=...,  # LLM instance
    )

    task >> generate
```
Source code in src/distilabel/steps/tasks/improving_text_embeddings.py
class GenerateLongTextMatchingData(_EmbeddingDataGeneration):
    """Generate long text matching data with an `LLM` to later on train an embedding model.

    `GenerateLongTextMatchingData` is a `Task` that generates long text matching data with an
    `LLM` to later on train an embedding model. The task is based on the paper "Improving
    Text Embeddings with Large Language Models" and the data is generated based on the
    provided attributes, or randomly sampled if not provided.

    Note:
        Ideally this task should be used with `EmbeddingTaskGenerator` with `flatten_tasks=True`
        with the `category="text-matching-long"`; so that the `LLM` generates a list of tasks that
        are flattened so that each row contains a single task for the text-matching-long category.

    Attributes:
        language: The language of the data to be generated, which can be any of the languages
            retrieved from the list of XLM-R in the Appendix A of https://aclanthology.org/2020.acl-main.747.pdf.
        seed: The random seed to be set in case there's any sampling within the `format_input` method.
            Note that in this task the `seed` has no effect since there are no sampling params.

    References:
        - [Improving Text Embeddings with Large Language Models](https://arxiv.org/abs/2401.00368)

    Examples:

        Generate synthetic long text matching data for training embedding models:

        ```python
        from distilabel.pipeline import Pipeline
        from distilabel.steps.tasks import EmbeddingTaskGenerator, GenerateLongTextMatchingData

        with Pipeline("my-pipeline") as pipeline:
            task = EmbeddingTaskGenerator(
                category="text-matching-long",
                flatten_tasks=True,
                llm=...,  # LLM instance
            )

            generate = GenerateLongTextMatchingData(
                language="English",
                llm=...,  # LLM instance
            )

            task >> generate
        ```
    """

    language: str = Field(
        default="English",
        description="The languages are retrieved from the list of XLM-R in the Appendix A of https://aclanthology.org/2020.acl-main.747.pdf",
    )

    _template_name: str = PrivateAttr(default="long-text-matching")

    def format_input(self, input: Dict[str, Any]) -> ChatType:
        """Method to format the input based on the `task` and the provided attributes, or just
        randomly sampling those if not provided. This method will render the `_template` with
        the provided arguments and return an OpenAI formatted chat i.e. a `ChatType`, assuming that
        there's only one turn, being from the user with the content being the rendered `_template`.

        Args:
            input: The input dictionary containing the `task` to be used in the `_template`.

        Returns:
            A list with a single chat containing the user's message with the rendered `_template`.
        """
        return [
            {
                "role": "user",
                "content": self._template.render(  # type: ignore
                    task=input["task"],
                    language=self.language,
                ).strip(),
            }
        ]

    @property
    def keys(self) -> List[str]:
        """Contains the `keys` that will be parsed from the `LLM` output into a Python dict."""
        return ["input", "positive_document"]

keys: List[str] property

Contains the keys that will be parsed from the LLM output into a Python dict.

format_input(input)

Method to format the input based on the task and the provided attributes, or just randomly sampling those if not provided. This method will render the _template with the provided arguments and return an OpenAI formatted chat i.e. a ChatType, assuming that there's only one turn, being from the user with the content being the rendered _template.

Parameters:

Name Type Description Default
input Dict[str, Any]

The input dictionary containing the task to be used in the _template.

required

Returns:

Type Description
ChatType

A list with a single chat containing the user's message with the rendered _template.

Source code in src/distilabel/steps/tasks/improving_text_embeddings.py
def format_input(self, input: Dict[str, Any]) -> ChatType:
    """Method to format the input based on the `task` and the provided attributes, or just
    randomly sampling those if not provided. This method will render the `_template` with
    the provided arguments and return an OpenAI formatted chat i.e. a `ChatType`, assuming that
    there's only one turn, being from the user with the content being the rendered `_template`.

    Args:
        input: The input dictionary containing the `task` to be used in the `_template`.

    Returns:
        A list with a single chat containing the user's message with the rendered `_template`.
    """
    return [
        {
            "role": "user",
            "content": self._template.render(  # type: ignore
                task=input["task"],
                language=self.language,
            ).strip(),
        }
    ]
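
A minimal sketch of calling `format_input` directly (assuming `generate` is the loaded `GenerateLongTextMatchingData` step from the example above; the task text is illustrative):

```python
chat = generate.format_input({"task": "Match a legal ruling with its plain-language summary."})
# chat == [{"role": "user", "content": "<long-text-matching prompt rendered from the Jinja2 template>"}]
# The LLM response is then parsed into the `keys` above: "input" and "positive_document".
```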

GenerateSentencePair

Bases: Task

Generate a positive sentence, and optionally a negative one, given an anchor sentence.

GenerateSentencePair is a pre-defined task that given an anchor sentence generates a positive sentence related to the anchor and optionally a negative sentence unrelated to the anchor or similar to it. Optionally, you can give a context to guide the LLM towards more specific behavior. This task is useful to generate training datasets for training embeddings models.

Attributes:

Name Type Description
triplet bool

a flag to indicate if the task should generate a triplet of sentences (anchor, positive, negative). Defaults to False.

action GenerationAction

the action to perform to generate the positive sentence.

context str

the context to use for the generation. Can be helpful to guide the LLM towards more specific context. Not used by default.

hard_negative bool

A flag to indicate if the negative should be a hard-negative or not. Hard negatives make it hard for the model to distinguish against the positive, with a higher degree of semantic similarity.

Input columns
  • anchor (str): The anchor sentence to generate the positive and negative sentences.
Output columns
  • positive (str): The positive sentence related to the anchor.
  • negative (str): The negative sentence unrelated to the anchor if triplet=True, or more similar to the positive to make it more challenging for a model to distinguish in case hard_negative=True.
  • model_name (str): The name of the model that was used to generate the sentences.
Categories
  • embedding

Examples:

Paraphrasing:

```python
from distilabel.steps.tasks import GenerateSentencePair
from distilabel.llms import InferenceEndpointsLLM

generate_sentence_pair = GenerateSentencePair(
    triplet=True, # `False` to generate only positive
    action="paraphrase",
    llm=InferenceEndpointsLLM(
        model_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
        tokenizer_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
    ),
    input_batch_size=10,
)

generate_sentence_pair.load()

result = generate_sentence_pair.process([{"anchor": "What Game of Thrones villain would be the most likely to give you mercy?"}])
```

Generating semantically similar sentences:

```python
from distilabel.llms import InferenceEndpointsLLM
from distilabel.steps.tasks import GenerateSentencePair

generate_sentence_pair = GenerateSentencePair(
    triplet=True, # `False` to generate only positive
    action="semantically-similar",
    llm=InferenceEndpointsLLM(
        model_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
        tokenizer_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
    ),
    input_batch_size=10,
)

generate_sentence_pair.load()

result = generate_sentence_pair.process([{"anchor": "How does 3D printing work?"}])
```

Generating queries:

```python
from distilabel.steps.tasks import GenerateSentencePair
from distilabel.llms import InferenceEndpointsLLM

generate_sentence_pair = GenerateSentencePair(
    triplet=True, # `False` to generate only positive
    action="query",
    llm=InferenceEndpointsLLM(
        model_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
        tokenizer_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
    ),
    input_batch_size=10,
)

generate_sentence_pair.load()

result = generate_sentence_pair.process([{"anchor": "Argilla is an open-source data curation platform for LLMs. Using Argilla, ..."}])
```

Generating answers:

```python
from distilabel.steps.tasks import GenerateSentencePair
from distilabel.llms import InferenceEndpointsLLM

generate_sentence_pair = GenerateSentencePair(
    triplet=True, # `False` to generate only positive
    action="answer",
    llm=InferenceEndpointsLLM(
        model_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
        tokenizer_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
    ),
    input_batch_size=10,
)

generate_sentence_pair.load()

result = generate_sentence_pair.process([{"anchor": "What Game of Thrones villain would be the most likely to give you mercy?"}])
```

Generating queries with context (**applies to every action**):

```python
from distilabel.steps.tasks import GenerateSentencePair
from distilabel.llms import InferenceEndpointsLLM

generate_sentence_pair = GenerateSentencePair(
    triplet=True, # `False` to generate only positive
    action="query",
    context="Argilla is an open-source data curation platform for LLMs.",
    llm=InferenceEndpointsLLM(
        model_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
        tokenizer_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
    ),
    input_batch_size=10,
)

generate_sentence_pair.load()

result = generate_sentence_pair.process([{"anchor": "I want to generate queries for my LLM."}])
```

Generating Hard-negatives (**applies to every action**):

```python
from distilabel.steps.tasks import GenerateSentencePair
from distilabel.llms import InferenceEndpointsLLM

generate_sentence_pair = GenerateSentencePair(
    triplet=True, # `False` to generate only positive
    action="query",
    context="Argilla is an open-source data curation platform for LLMs.",
    hard_negative=True,
    llm=InferenceEndpointsLLM(
        model_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
        tokenizer_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
    ),
    input_batch_size=10,
)

generate_sentence_pair.load()

result = generate_sentence_pair.process([{"anchor": "I want to generate queries for my LLM."}])
```
Source code in src/distilabel/steps/tasks/sentence_transformers.py
class GenerateSentencePair(Task):
    """Generate a positive and negative (optionally) sentences given an anchor sentence.

    `GenerateSentencePair` is a pre-defined task that given an anchor sentence generates
    a positive sentence related to the anchor and optionally a negative sentence unrelated
    to the anchor or similar to it. Optionally, you can give a context to guide the LLM
    towards more specific behavior. This task is useful to generate training datasets for
    training embeddings models.

    Attributes:
        triplet: a flag to indicate if the task should generate a triplet of sentences
            (anchor, positive, negative). Defaults to `False`.
        action: the action to perform to generate the positive sentence.
        context: the context to use for the generation. Can be helpful to guide the LLM
            towards more specific context. Not used by default.
        hard_negative: A flag to indicate if the negative should be a hard-negative or not.
            Hard negatives make it hard for the model to distinguish against the positive,
            with a higher degree of semantic similarity.

    Input columns:
        - anchor (`str`): The anchor sentence to generate the positive and negative sentences.

    Output columns:
        - positive (`str`): The positive sentence related to the `anchor`.
        - negative (`str`): The negative sentence unrelated to the `anchor` if `triplet=True`,
            or more similar to the positive to make it more challenging for a model to distinguish
            in case `hard_negative=True`.
        - model_name (`str`): The name of the model that was used to generate the sentences.

    Categories:
        - embedding

    Examples:

        Paraphrasing:

        ```python
        from distilabel.steps.tasks import GenerateSentencePair
        from distilabel.llms import InferenceEndpointsLLM

        generate_sentence_pair = GenerateSentencePair(
            triplet=True, # `False` to generate only positive
            action="paraphrase",
            llm=InferenceEndpointsLLM(
                model_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
                tokenizer_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
            ),
            input_batch_size=10,
        )

        generate_sentence_pair.load()

        result = generate_sentence_pair.process([{"anchor": "What Game of Thrones villain would be the most likely to give you mercy?"}])
        ```

        Generating semantically similar sentences:

        ```python
        from distilabel.llms import InferenceEndpointsLLM
        from distilabel.steps.tasks import GenerateSentencePair

        generate_sentence_pair = GenerateSentencePair(
            triplet=True, # `False` to generate only positive
            action="semantically-similar",
            llm=InferenceEndpointsLLM(
                model_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
                tokenizer_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
            ),
            input_batch_size=10,
        )

        generate_sentence_pair.load()

        result = generate_sentence_pair.process([{"anchor": "How does 3D printing work?"}])
        ```

        Generating queries:

        ```python
        from distilabel.steps.tasks import GenerateSentencePair
        from distilabel.llms import InferenceEndpointsLLM

        generate_sentence_pair = GenerateSentencePair(
            triplet=True, # `False` to generate only positive
            action="query",
            llm=InferenceEndpointsLLM(
                model_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
                tokenizer_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
            ),
            input_batch_size=10,
        )

        generate_sentence_pair.load()

        result = generate_sentence_pair.process([{"anchor": "Argilla is an open-source data curation platform for LLMs. Using Argilla, ..."}])
        ```

        Generating answers:

        ```python
        from distilabel.steps.tasks import GenerateSentencePair
        from distilabel.llms import InferenceEndpointsLLM

        generate_sentence_pair = GenerateSentencePair(
            triplet=True, # `False` to generate only positive
            action="answer",
            llm=InferenceEndpointsLLM(
                model_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
                tokenizer_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
            ),
            input_batch_size=10,
        )

        generate_sentence_pair.load()

        result = generate_sentence_pair.process([{"anchor": "What Game of Thrones villain would be the most likely to give you mercy?"}])
        ```

        Generating queries with context (**applies to every action**):

        ```python
        from distilabel.steps.tasks import GenerateSentencePair
        from distilabel.llms import InferenceEndpointsLLM

        generate_sentence_pair = GenerateSentencePair(
            triplet=True, # `False` to generate only positive
            action="query",
            context="Argilla is an open-source data curation platform for LLMs.",
            llm=InferenceEndpointsLLM(
                model_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
                tokenizer_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
            ),
            input_batch_size=10,
        )

        generate_sentence_pair.load()

        result = generate_sentence_pair.process([{"anchor": "I want to generate queries for my LLM."}])
        ```

        Generating Hard-negatives (**applies to every action**):

        ```python
        from distilabel.steps.tasks import GenerateSentencePair
        from distilabel.llms import InferenceEndpointsLLM

        generate_sentence_pair = GenerateSentencePair(
            triplet=True, # `False` to generate only positive
            action="query",
            context="Argilla is an open-source data curation platform for LLMs.",
            hard_negative=True,
            llm=InferenceEndpointsLLM(
                model_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
                tokenizer_id="meta-llama/Meta-Llama-3.1-70B-Instruct",
            ),
            input_batch_size=10,
        )

        generate_sentence_pair.load()

        result = generate_sentence_pair.process([{"anchor": "I want to generate queries for my LLM."}])
        ```

    """

    triplet: bool = False
    action: GenerationAction
    hard_negative: bool = False
    context: str = ""

    def load(self) -> None:
        """Loads the Jinja2 template."""
        super().load()

        _path = str(
            importlib_resources.files("distilabel")
            / "steps"
            / "tasks"
            / "templates"
            / "generate-sentence-pair.jinja2"
        )

        self._template = Template(open(_path).read())

    @property
    def inputs(self) -> List[str]:
        """The inputs for the task is the `anchor` sentence."""
        return ["anchor"]

    def format_input(self, input: Dict[str, Any]) -> "ChatType":
        """The inputs are formatted as a `ChatType`, with a system prompt describing the
        task of generating a positive and negative sentences for the anchor sentence. The
        anchor is provided as the first user interaction in the conversation.

        Args:
            input: The input containing the `anchor` sentence.

        Returns:
            A list of dictionaries containing the system and user interactions.
        """
        action_sentence = GENERATION_ACTION_SENTENCES[self.action]

        format_system_prompt = {
            "action_sentence": action_sentence,
            "context": CONTEXT_INTRO if self.context else "",
        }
        if self.triplet:
            format_system_prompt["negative_style"] = NEGATIVE_STYLE[
                "hard-negative" if self.hard_negative else "negative"
            ]

        system_prompt = (
            POSITIVE_NEGATIVE_SYSTEM_PROMPT if self.triplet else POSITIVE_SYSTEM_PROMPT
        ).format(**format_system_prompt)

        return [
            {"role": "system", "content": system_prompt},
            {
                "role": "user",
                "content": self._template.render(
                    anchor=input["anchor"],
                    context=self.context if self.context else None,
                ),
            },
        ]

    @property
    def outputs(self) -> List[str]:
        """The outputs for the task are the `positive` and `negative` sentences, as well
        as the `model_name` used to generate the sentences."""
        columns = ["positive", "negative"] if self.triplet else ["positive"]
        columns += ["model_name"]
        return columns

    def format_output(
        self, output: Union[str, None], input: Optional[Dict[str, Any]] = None
    ) -> Dict[str, Any]:
        """Formats the output of the LLM, to extract the `positive` and `negative` sentences
        generated. If the output is `None` or the regex doesn't match, then the outputs
        will be set to `None` as well.

        Args:
            output: The output of the LLM.
            input: The input used to generate the output.

        Returns:
            The formatted output containing the `positive` and `negative` sentences.
        """
        if output is None:
            return {"positive": None, "negative": None}

        match = POSITIVE_NEGATIVE_PAIR_REGEX.match(output)
        if match is None:
            formatted_output = {"positive": None}
            if self.triplet:
                formatted_output["negative"] = None
            return formatted_output

        groups = match.groups()
        if self.triplet:
            return {
                "positive": groups[0].strip(),
                "negative": groups[1].strip()
                if len(groups) > 1 and groups[1] is not None
                else None,
            }

        return {"positive": groups[0].strip()}

inputs: List[str] property

The input for the task is the anchor sentence.

outputs: List[str] property

The outputs for the task are the positive and negative sentences, as well as the model_name used to generate the sentences.

format_input(input)

The inputs are formatted as a ChatType, with a system prompt describing the task of generating positive and negative sentences for the anchor sentence. The anchor is provided as the first user interaction in the conversation.

Parameters:

Name Type Description Default
input Dict[str, Any]

The input containing the anchor sentence.

required

Returns:

Type Description
ChatType

A list of dictionaries containing the system and user interactions.

Source code in src/distilabel/steps/tasks/sentence_transformers.py
def format_input(self, input: Dict[str, Any]) -> "ChatType":
    """The inputs are formatted as a `ChatType`, with a system prompt describing the
    task of generating a positive and negative sentences for the anchor sentence. The
    anchor is provided as the first user interaction in the conversation.

    Args:
        input: The input containing the `anchor` sentence.

    Returns:
        A list of dictionaries containing the system and user interactions.
    """
    action_sentence = GENERATION_ACTION_SENTENCES[self.action]

    format_system_prompt = {
        "action_sentence": action_sentence,
        "context": CONTEXT_INTRO if self.context else "",
    }
    if self.triplet:
        format_system_prompt["negative_style"] = NEGATIVE_STYLE[
            "hard-negative" if self.hard_negative else "negative"
        ]

    system_prompt = (
        POSITIVE_NEGATIVE_SYSTEM_PROMPT if self.triplet else POSITIVE_SYSTEM_PROMPT
    ).format(**format_system_prompt)

    return [
        {"role": "system", "content": system_prompt},
        {
            "role": "user",
            "content": self._template.render(
                anchor=input["anchor"],
                context=self.context if self.context else None,
            ),
        },
    ]
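
A minimal sketch of the chat produced by `format_input` (assuming `generate_sentence_pair` is the loaded step from the "Paraphrasing" example above):

```python
chat = generate_sentence_pair.format_input({"anchor": "How does 3D printing work?"})
# chat[0]["role"] == "system"   # built from the action sentence and, if triplet=True, the negative style
# chat[1]["role"] == "user"     # the rendered template containing the anchor (and the context, if any)
```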

format_output(output, input=None)

Formats the output of the LLM, to extract the positive and negative sentences generated. If the output is None or the regex doesn't match, then the outputs will be set to None as well.

Parameters:

Name Type Description Default
output Union[str, None]

The output of the LLM.

required
input Optional[Dict[str, Any]]

The input used to generate the output.

None

Returns:

Type Description
Dict[str, Any]

The formatted output containing the positive and negative sentences.

Source code in src/distilabel/steps/tasks/sentence_transformers.py
def format_output(
    self, output: Union[str, None], input: Optional[Dict[str, Any]] = None
) -> Dict[str, Any]:
    """Formats the output of the LLM, to extract the `positive` and `negative` sentences
    generated. If the output is `None` or the regex doesn't match, then the outputs
    will be set to `None` as well.

    Args:
        output: The output of the LLM.
        input: The input used to generate the output.

    Returns:
        The formatted output containing the `positive` and `negative` sentences.
    """
    if output is None:
        return {"positive": None, "negative": None}

    match = POSITIVE_NEGATIVE_PAIR_REGEX.match(output)
    if match is None:
        formatted_output = {"positive": None}
        if self.triplet:
            formatted_output["negative"] = None
        return formatted_output

    groups = match.groups()
    if self.triplet:
        return {
            "positive": groups[0].strip(),
            "negative": groups[1].strip()
            if len(groups) > 1 and groups[1] is not None
            else None,
        }

    return {"positive": groups[0].strip()}

load()

Loads the Jinja2 template.

Source code in src/distilabel/steps/tasks/sentence_transformers.py
def load(self) -> None:
    """Loads the Jinja2 template."""
    super().load()

    _path = str(
        importlib_resources.files("distilabel")
        / "steps"
        / "tasks"
        / "templates"
        / "generate-sentence-pair.jinja2"
    )

    self._template = Template(open(_path).read())
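
A minimal sketch of how the same template path can be resolved outside the step (this mirrors the `load` implementation above and is purely illustrative):

```python
import importlib.resources as importlib_resources

template_path = (
    importlib_resources.files("distilabel")
    / "steps"
    / "tasks"
    / "templates"
    / "generate-sentence-pair.jinja2"
)
print(template_path)  # location of the Jinja2 template bundled with distilabel
```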

GenerateShortTextMatchingData

Bases: _EmbeddingDataGeneration

Generate short text matching data with an LLM to later on train an embedding model.

GenerateShortTextMatchingData is a Task that generates short text matching data with an LLM to later on train an embedding model. The task is based on the paper "Improving Text Embeddings with Large Language Models" and the data is generated based on the provided attributes, or randomly sampled if not provided.

Note

Ideally this task should be used with EmbeddingTaskGenerator with flatten_tasks=True and category="text-matching-short", so that the LLM generates a list of tasks that is flattened and each row contains a single task for the text-matching-short category.

Attributes:

Name Type Description
language str

The language of the data to be generated, which can be any of the languages retrieved from the list of XLM-R in the Appendix A of https://aclanthology.org/2020.acl-main.747.pdf.

seed str

The random seed to be set in case there's any sampling within the format_input method. Note that in this task the seed has no effect since there are no sampling params.

References
  • Improving Text Embeddings with Large Language Models: https://arxiv.org/abs/2401.00368

Examples:

Generate synthetic short text matching data for training embedding models:

```python
from distilabel.pipeline import Pipeline
from distilabel.steps.tasks import EmbeddingTaskGenerator, GenerateShortTextMatchingData

with Pipeline("my-pipeline") as pipeline:
    task = EmbeddingTaskGenerator(
        category="text-matching-short",
        flatten_tasks=True,
        llm=...,  # LLM instance
    )

    generate = GenerateShortTextMatchingData(
        language="English",
        llm=...,  # LLM instance
    )

    task >> generate
```
Source code in src/distilabel/steps/tasks/improving_text_embeddings.py
class GenerateShortTextMatchingData(_EmbeddingDataGeneration):
    """Generate short text matching data with an `LLM` to later on train an embedding model.

    `GenerateShortTextMatchingData` is a `Task` that generates short text matching data with an
    `LLM` to later on train an embedding model. The task is based on the paper "Improving
    Text Embeddings with Large Language Models" and the data is generated based on the
    provided attributes, or randomly sampled if not provided.

    Note:
        Ideally this task should be used with `EmbeddingTaskGenerator` with `flatten_tasks=True`
        with the `category="text-matching-short"`; so that the `LLM` generates a list of tasks that
        are flattened so that each row contains a single task for the text-matching-short category.

    Attributes:
        language: The language of the data to be generated, which can be any of the languages
            retrieved from the list of XLM-R in the Appendix A of https://aclanthology.org/2020.acl-main.747.pdf.
        seed: The random seed to be set in case there's any sampling within the `format_input` method.
            Note that in this task the `seed` has no effect since there are no sampling params.

    References:
        - [Improving Text Embeddings with Large Language Models](https://arxiv.org/abs/2401.00368)

    Examples:

        Generate synthetic short text matching data for training embedding models:

        ```python
        from distilabel.pipeline import Pipeline
        from distilabel.steps.tasks import EmbeddingTaskGenerator, GenerateShortTextMatchingData

        with Pipeline("my-pipeline") as pipeline:
            task = EmbeddingTaskGenerator(
                category="text-matching-short",
                flatten_tasks=True,
                llm=...,  # LLM instance
            )

            generate = GenerateShortTextMatchingData(
                language="English",
                llm=...,  # LLM instance
            )

            task >> generate
        ```
    """

    language: str = Field(
        default="English",
        description="The languages are retrieved from the list of XLM-R in the Appendix A of https://aclanthology.org/2020.acl-main.747.pdf",
    )

    _template_name: str = PrivateAttr(default="short-text-matching")

    def format_input(self, input: Dict[str, Any]) -> ChatType:
        """Method to format the input based on the `task` and the provided attributes, or just
        randomly sampling those if not provided. This method will render the `_template` with
        the provided arguments and return an OpenAI formatted chat i.e. a `ChatType`, assuming that
        there's only one turn, being from the user with the content being the rendered `_template`.

        Args:
            input: The input dictionary containing the `task` to be used in the `_template`.

        Returns:
            A list with a single chat containing the user's message with the rendered `_template`.
        """
        return [
            {
                "role": "user",
                "content": self._template.render(  # type: ignore
                    task=input["task"],
                    language=self.language,
                ).strip(),
            }
        ]

    @property
    def keys(self) -> List[str]:
        """Contains the `keys` that will be parsed from the `LLM` output into a Python dict."""
        return ["input", "positive_document"]

keys: List[str] property

Contains the keys that will be parsed from the LLM output into a Python dict.

format_input(input)

Method to format the input based on the task and the provided attributes, or just randomly sampling those if not provided. This method will render the _template with the provided arguments and return an OpenAI formatted chat i.e. a ChatType, assuming that there's only one turn, being from the user with the content being the rendered _template.

Parameters:

Name Type Description Default
input Dict[str, Any]

The input dictionary containing the task to be used in the _template.

required

Returns:

Type Description
ChatType

A list with a single chat containing the user's message with the rendered _template.

Source code in src/distilabel/steps/tasks/improving_text_embeddings.py
def format_input(self, input: Dict[str, Any]) -> ChatType:
    """Method to format the input based on the `task` and the provided attributes, or just
    randomly sampling those if not provided. This method will render the `_template` with
    the provided arguments and return an OpenAI formatted chat i.e. a `ChatType`, assuming that
    there's only one turn, being from the user with the content being the rendered `_template`.

    Args:
        input: The input dictionary containing the `task` to be used in the `_template`.

    Returns:
        A list with a single chat containing the user's message with the rendered `_template`.
    """
    return [
        {
            "role": "user",
            "content": self._template.render(  # type: ignore
                task=input["task"],
                language=self.language,
            ).strip(),
        }
    ]

GenerateTextClassificationData

Bases: _EmbeddingDataGeneration

Generate text classification data with an LLM to later on train an embedding model.

GenerateTextClassificationData is a Task that generates text classification data with an LLM to later on train an embedding model. The task is based on the paper "Improving Text Embeddings with Large Language Models" and the data is generated based on the provided attributes, or randomly sampled if not provided.

Note

Ideally this task should be used with EmbeddingTaskGenerator with flatten_tasks=True and category="text-classification", so that the LLM generates a list of tasks that is flattened and each row contains a single task for the text-classification category.

Attributes:

Name Type Description
language str

The language of the data to be generated, which can be any of the languages retrieved from the list of XLM-R in the Appendix A of https://aclanthology.org/2020.acl-main.747.pdf.

difficulty Optional[Literal['high school', 'college', 'PhD']]

The difficulty of the query to be generated, which can be high school, college, or PhD. Defaults to None, meaning that it will be randomly sampled.

clarity Optional[Literal['clear', 'understandable with some effort', 'ambiguous']]

The clarity of the query to be generated, which can be clear, understandable with some effort, or ambiguous. Defaults to None, meaning that it will be randomly sampled.

seed Optional[Literal['clear', 'understandable with some effort', 'ambiguous']]

The random seed to be set in case there's any sampling within the format_input method.

References
  • Improving Text Embeddings with Large Language Models: https://arxiv.org/abs/2401.00368

Examples:

Generate synthetic text classification data for training embedding models:

```python
from distilabel.pipeline import Pipeline
from distilabel.steps.tasks import EmbeddingTaskGenerator, GenerateTextClassificationData

with Pipeline("my-pipeline") as pipeline:
    task = EmbeddingTaskGenerator(
        category="text-classification",
        flatten_tasks=True,
        llm=...,  # LLM instance
    )

    generate = GenerateTextClassificationData(
        language="English",
        difficulty="high school",
        clarity="clear",
        llm=...,  # LLM instance
    )

    task >> generate
```
Source code in src/distilabel/steps/tasks/improving_text_embeddings.py
class GenerateTextClassificationData(_EmbeddingDataGeneration):
    """Generate text classification data with an `LLM` to later on train an embedding model.

    `GenerateTextClassificationData` is a `Task` that generates text classification data with an
    `LLM` to later on train an embedding model. The task is based on the paper "Improving
    Text Embeddings with Large Language Models" and the data is generated based on the
    provided attributes, or randomly sampled if not provided.

    Note:
        Ideally this task should be used with `EmbeddingTaskGenerator` with `flatten_tasks=True`
        with the `category="text-classification"`; so that the `LLM` generates a list of tasks that
        are flattened so that each row contains a single task for the text-classification category.

    Attributes:
        language: The language of the data to be generated, which can be any of the languages
            retrieved from the list of XLM-R in the Appendix A of https://aclanthology.org/2020.acl-main.747.pdf.
        difficulty: The difficulty of the query to be generated, which can be `high school`, `college`, or `PhD`.
            Defaults to `None`, meaning that it will be randomly sampled.
        clarity: The clarity of the query to be generated, which can be `clear`, `understandable with some effort`,
            or `ambiguous`. Defaults to `None`, meaning that it will be randomly sampled.
        seed: The random seed to be set in case there's any sampling within the `format_input` method.

    References:
        - [Improving Text Embeddings with Large Language Models](https://arxiv.org/abs/2401.00368)

    Examples:

        Generate synthetic text classification data for training embedding models:

        ```python
        from distilabel.pipeline import Pipeline
        from distilabel.steps.tasks import EmbeddingTaskGenerator, GenerateTextClassificationData

        with Pipeline("my-pipeline") as pipeline:
            task = EmbeddingTaskGenerator(
                category="text-classification",
                flatten_tasks=True,
                llm=...,  # LLM instance
            )

            generate = GenerateTextClassificationData(
                language="English",
                difficulty="high school",
                clarity="clear",
                llm=...,  # LLM instance
            )

            task >> generate
        ```
    """

    language: str = Field(
        default="English",
        description="The languages are retrieved from the list of XLM-R in the Appendix A of https://aclanthology.org/2020.acl-main.747.pdf",
    )

    difficulty: Optional[Literal["high school", "college", "PhD"]] = None
    clarity: Optional[
        Literal["clear", "understandable with some effort", "ambiguous"]
    ] = None

    _template_name: str = PrivateAttr(default="text-classification")

    def format_input(self, input: Dict[str, Any]) -> ChatType:
        """Method to format the input based on the `task` and the provided attributes, or just
        randomly sampling those if not provided. This method will render the `_template` with
        the provided arguments and return an OpenAI formatted chat i.e. a `ChatType`, assuming that
        there's only one turn, being from the user with the content being the rendered `_template`.

        Args:
            input: The input dictionary containing the `task` to be used in the `_template`.

        Returns:
            A list with a single chat containing the user's message with the rendered `_template`.
        """
        return [
            {
                "role": "user",
                "content": self._template.render(  # type: ignore
                    task=input["task"],
                    language=self.language,
                    difficulty=self.difficulty
                    or random.choice(["high school", "college", "PhD"]),
                    clarity=self.clarity
                    or random.choice(
                        ["clear", "understandable with some effort", "ambiguous"]
                    ),
                ).strip(),
            }
        ]

    @property
    def keys(self) -> List[str]:
        """Contains the `keys` that will be parsed from the `LLM` output into a Python dict."""
        return ["input_text", "label", "misleading_label"]

keys: List[str] property

Contains the keys that will be parsed from the LLM output into a Python dict.
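
A minimal sketch of the dictionary expected to be parsed from the LLM output using the `keys` above (the values are illustrative, not real output):

```python
parsed = {
    "input_text": "The battery life on this laptop is outstanding.",
    "label": "positive review",
    "misleading_label": "negative review",
}
```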

format_input(input)

Method to format the input based on the task and the provided attributes, or just randomly sampling those if not provided. This method will render the _template with the provided arguments and return an OpenAI formatted chat i.e. a ChatType, assuming that there's only one turn, being from the user with the content being the rendered _template.

Parameters:

Name Type Description Default
input Dict[str, Any]

The input dictionary containing the task to be used in the _template.

required

Returns:

Type Description
ChatType

A list with a single chat containing the user's message with the rendered _template.

Source code in src/distilabel/steps/tasks/improving_text_embeddings.py
def format_input(self, input: Dict[str, Any]) -> ChatType:
    """Method to format the input based on the `task` and the provided attributes, or just
    randomly sampling those if not provided. This method will render the `_template` with
    the provided arguments and return an OpenAI formatted chat i.e. a `ChatType`, assuming that
    there's only one turn, being from the user with the content being the rendered `_template`.

    Args:
        input: The input dictionary containing the `task` to be used in the `_template`.

    Returns:
        A list with a single chat containing the user's message with the rendered `_template`.
    """
    return [
        {
            "role": "user",
            "content": self._template.render(  # type: ignore
                task=input["task"],
                language=self.language,
                difficulty=self.difficulty
                or random.choice(["high school", "college", "PhD"]),
                clarity=self.clarity
                or random.choice(
                    ["clear", "understandable with some effort", "ambiguous"]
                ),
            ).strip(),
        }
    ]

GenerateTextRetrievalData

Bases: _EmbeddingDataGeneration

Generate text retrieval data with an LLM to later on train an embedding model.

GenerateTextRetrievalData is a Task that generates text retrieval data with an LLM to later on train an embedding model. The task is based on the paper "Improving Text Embeddings with Large Language Models" and the data is generated based on the provided attributes, or randomly sampled if not provided.

Note

Ideally this task should be used with EmbeddingTaskGenerator with flatten_tasks=True and category="text-retrieval", so that the LLM generates a list of tasks that is flattened and each row contains a single task for the text-retrieval category.

Attributes:

Name Type Description
language str

The language of the data to be generated, which can be any of the languages retrieved from the list of XLM-R in the Appendix A of https://aclanthology.org/2020.acl-main.747.pdf.

query_type Optional[Literal['extremely long-tail', 'long-tail', 'common']]