ultracm
UltraCMTask (dataclass)
Bases: CritiqueTask
A CritiqueTask following the prompt template used by UltraCM (from UltraFeedback).
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| system_prompt | str | the system prompt to be used for generation. | "User: A one-turn chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, very detailed, and polite answers to the user's questions.</s>" |
Disclaimer
Since the UltraCM model has been trained on data generated with the OpenAI API, its prompting strategy may only be consistent with GPT-3.5, GPT-4, or the UltraCM model itself. Other models may fail to generate a structured output, or may produce an incorrect or inaccurate critique.
Source code in src/distilabel/tasks/critique/ultracm.py
generate_prompt(input, generations, **_)
Generates a prompt following the UltraCM specification.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| input | str | the input to be used for the prompt. | required |
| generations | List[str] | the generations to be used for the prompt, in this case, the ones to be critiqued. | required |
Returns:

| Name | Type | Description |
|---|---|---|
| Prompt | Prompt | the generated prompt. |
Examples:
>>> from distilabel.tasks.critique import UltraCMTask
>>> task = UltraCMTask()
>>> task.generate_prompt(
... input="What are the first 5 Fibonacci numbers?",
... generations=["0 1 1 2 3", "0 1 1 2 3"],
... )
Prompt(
system_prompt="User: A one-turn chat between a curious user ...",
formatted_prompt="User: Given my answer to an instruction, your role ...",
)
Source code in src/distilabel/tasks/critique/ultracm.py
parse_output(output)
Parses the output of the model into the desired format.
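A hedged usage sketch follows, assuming parse_output extracts a numeric score and the critique text from the raw model completion; the sample completion and the output field names are assumptions, not guaranteed by this reference.

```python
from distilabel.tasks.critique import UltraCMTask

task = UltraCMTask()

# Illustrative raw completion; the actual format depends on what the UltraCM
# model generates for the prompt produced by generate_prompt.
raw_output = "4.5 The answer correctly lists the first five Fibonacci numbers ..."

# parse_output is expected (assumption) to return a structured critique,
# e.g. exposing a score and a critique field, per the CritiqueTask output schema.
parsed = task.parse_output(raw_output)
print(parsed)
```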