# judgelm

## JudgeLMOutput

## JudgeLMTask `dataclass`
Bases: `PreferenceTask`

A `PreferenceTask` following the prompt template used by JudgeLM.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `system_prompt` | `str` | The system prompt to be used for generation. | `'You are a helpful and precise assistant for checking the quality of the answer.'` |
| `task_description` | `Union[str, None]` | The description of the task. | `'We would like to request your feedback on the performance of {num_responses} AI assistants in response to the user question displayed above.\nPlease rate the helpfulness, relevance, accuracy, level of details of their responses. Each assistant receives an overall score on a scale of 1 to 10, where a higher score indicates better overall performance.\nPlease first output a single line containing only {num_responses} values indicating the scores for Assistants 1 to {num_responses}, respectively. The {num_responses} scores are separated by a space. In the subsequent line, please provide a comprehensive explanation of your evaluation, avoiding any potential bias and ensuring that the order in which the responses were presented does not affect your judgment.'` |
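Since the default `task_description` contains `{num_responses}` placeholders, it can be rendered with standard `str.format`. A minimal sketch of that rendering (not the library's internals; `DEFAULT_TASK_DESCRIPTION` here is just a local copy of the default shown above):

```python
# Local copy of the default task_description shown in the table above,
# used only to illustrate how the {num_responses} placeholders are filled.
DEFAULT_TASK_DESCRIPTION = (
    "We would like to request your feedback on the performance of "
    "{num_responses} AI assistants in response to the user question "
    "displayed above.\nPlease rate the helpfulness, relevance, accuracy, "
    "level of details of their responses. Each assistant receives an "
    "overall score on a scale of 1 to 10, where a higher score indicates "
    "better overall performance.\nPlease first output a single line "
    "containing only {num_responses} values indicating the scores for "
    "Assistants 1 to {num_responses}, respectively. The {num_responses} "
    "scores are separated by a space. In the subsequent line, please "
    "provide a comprehensive explanation of your evaluation, avoiding any "
    "potential bias and ensuring that the order in which the responses "
    "were presented does not affect your judgment."
)

# Filling in the placeholders for a comparison between two responses.
description = DEFAULT_TASK_DESCRIPTION.format(num_responses=2)
print(description.splitlines()[0])
```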
References
Source code in src/distilabel/tasks/preference/judgelm.py
### `generate_prompt(input, generations, **_)`

Generates a prompt following the JudgeLM specification.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `input` | `str` | The input to be used for the prompt. | *required* |
| `generations` | `List[str]` | The generations to be used for the prompt. | *required* |
Returns:

| Name | Type | Description |
|---|---|---|
| `Prompt` | `Prompt` | The generated prompt. |
Examples:

```python
>>> from distilabel.tasks.preference import JudgeLMTask
>>> task = JudgeLMTask(system_prompt="You are a helpful assistant.")
>>> task.generate_prompt("What are the first 5 Fibonacci numbers?", ["0 1 1 2 3", "0 1 1 2 3"])
Prompt(
    system_prompt="You are a helpful assistant.",
    formatted_prompt="[Question] What are the first 5 Fibonacci numbers? ...",
)
```
Source code in src/distilabel/tasks/preference/judgelm.py
### `parse_output(output)`

Parses the output of the model into the desired format.
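The default task description asks the model to emit a first line of space-separated scores followed by an explanation. Based on that format, parsing could look like the following sketch; `parse_judgelm_output` is a hypothetical helper for illustration, not the actual `parse_output` implementation:

```python
import re


def parse_judgelm_output(output: str) -> dict:
    """Sketch of parsing a JudgeLM-style completion.

    Assumes the format requested by the default task description: the
    first line holds the space-separated scores, and the remaining
    lines hold the rationale. (Illustrative only; the library's
    parse_output may differ.)
    """
    score_line, _, rationale = output.partition("\n")
    # Accept integer or decimal scores, e.g. "8 6" or "8.5 6.0".
    ratings = [float(s) for s in re.findall(r"\d+(?:\.\d+)?", score_line)]
    return {"rating": ratings, "rationale": rationale.strip()}


parsed = parse_judgelm_output("8 6\nAssistant 1 gave a more detailed answer.")
```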