
⚖️ Create a legal preference dataset

Open In Colab Open Source in Github

In this tutorial, you will learn how to use the Notus model on Inference Endpoints to create a legal preference dataset based on RAG instructions built from the European AI Act: a full end-to-end example of how to use distilabel to leverage LLMs!

distilabel is an AI Feedback (AIF) framework that can generate and label datasets using LLMs, and it can be used for many different use cases. Implemented with robustness, efficiency and scalability in mind, it allows anyone to build their own synthetic datasets for a wide range of scenarios. This tutorial shows an end-to-end example in which we create a model that is an expert in the new AI Act, to which we can pose different types of questions and requests.

The LLM that we will fine-tune for this is Notus 7B, a fine-tuned version of Zephyr 7B that uses Direct Preference Optimization (DPO) and AIF techniques to outperform its base model on several benchmarks, and it is completely open-source.

This tutorial includes the following steps:

  • Defining a custom generating task for a distilabel pipeline.
  • Creating a RAG pipeline using Haystack for the EU AI Act.
  • Generating an instruction dataset with SelfInstructTask.
  • Generating a preference dataset using an UltraFeedback text quality task.

You can use the Open in Colab button at the top of this page. This option allows you to run the notebook directly on Google Colab. Don't forget to change the runtime type to GPU for faster model training and inference.

Introduction

Let's start by installing the required dependencies to run distilabel, Argilla and the rest of the packages used in the tutorial; most notably, Haystack.

%pip install "distilabel[hf-inference-endpoints]" "farm-haystack[preprocessing]"

Import dependencies

The main dependencies for this tutorial are distilabel, for creating the synthetic datasets, and Argilla, for visualizing and annotating these datasets and later fine-tuning our model. The Haystack package is used to create batches from the original PDF document that we want to build our datasets from.

import os
from typing import Dict

from distilabel.llm import InferenceEndpointsLLM
from distilabel.pipeline import Pipeline, pipeline
from distilabel.tasks import TextGenerationTask, SelfInstructTask, Prompt

from datasets import Dataset
from haystack.nodes import PDFToTextConverter, PreProcessor

Environment variables

Additionally, we need to provide our Hugging Face and OpenAI access tokens. To later instantiate an InferenceEndpointsLLM object, we also need to pass the HF Inference Endpoint name and the HF namespace as parameters. A very convenient way to do so is through environment variables.

os.environ["HF_TOKEN"] = ""
os.environ["HF_INFERENCE_ENDPOINT_NAME"] = "aws-notus-7b-v1-3184"
os.environ["HF_NAMESPACE"] = "argilla"
os.environ["OPENAI_API_KEY"] = ""

Setting up an inference endpoint with Notus

Inference Endpoints are a solution, managed by Hugging Face, to easily deploy any Transformer-like model. They are built from models on the Hugging Face Hub. Inference Endpoints are really handy for running inference on LLMs without the hassle of trying to run the models locally. In this tutorial, we will use an Inference Endpoint to generate text with our Notus model as part of the distilabel workflow. The endpoint of choice has a Notus 7B instance running.
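
If you want to double-check that the endpoint referenced by the environment variables above is reachable before wiring it into distilabel, you can query it with huggingface_hub. This is an optional sketch, not part of the original tutorial, and it assumes your token has access to the endpoint in that namespace:

from huggingface_hub import get_inference_endpoint

# Fetch the endpoint by name and namespace and check its status
endpoint = get_inference_endpoint(
    os.getenv("HF_INFERENCE_ENDPOINT_NAME"),
    namespace=os.getenv("HF_NAMESPACE"),
    token=os.getenv("HF_TOKEN") or None,
)
print(endpoint.status)  # should report "running" once the endpoint is ready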

Defining a custom generating task for a distilabel pipeline

To kickstart this tutorial, let's see how to set up an endpoint for our Notus model. It's not part of the end-to-end example we'll see later, but rather an example of how to connect to a Hugging Face endpoint and a quick test of the distilabel pipeline.

Let's dive into this quick example of how to use an Inference Endpoint. We have prepared a simple TextGenerationTask to ask questions to the model, much like we talk to LLMs through chatbots. First, we define a class for the question-answering task, with methods that show distilabel how the model should generate the prompts, parse the input and the output, and so on.

class QuestionAnsweringTask(TextGenerationTask):
    def generate_prompt(self, question: str) -> str:
        # Build the prompt from the system prompt and the question,
        # formatted with the Llama 2 chat template.
        return Prompt(
            system_prompt=self.system_prompt,
            formatted_prompt=question,
        ).format_as(
            "llama2"
        )  # type: ignore

    def parse_output(self, output: str) -> Dict[str, str]:
        # The raw generation is returned as the "answer" field.
        return {"answer": output.strip()}

    @property
    def input_args_names(self) -> list[str]:
        # Dataset column expected as input.
        return ["question"]

    @property
    def output_args_names(self) -> list[str]:
        # Column added to the dataset with the parsed generation.
        return ["answer"]

llm is an instance of the InferenceEndpointsLLM class; using it, we can start generating answers to questions with the llm.generate() method.

llm = InferenceEndpointsLLM(
    endpoint_name_or_model_id=os.getenv("HF_INFERENCE_ENDPOINT_NAME"),  # type: ignore
    endpoint_namespace=os.getenv("HF_NAMESPACE"),  # type: ignore
    token=os.getenv("HF_TOKEN") or None,
    task=QuestionAnsweringTask(),
)

With the InferenceEndpointsLLM object defined with the endpoint information and the Task, we can go ahead and start generating text. Let's ask this LLM, for example, what the second most populated city in Denmark is. The answer should be Aarhus.

generation = llm.generate(
    [{"question": "What's the second most populated city in Denmark?"}]
)
generation[0][0]["parsed_output"]["answer"]
'The second most populated city in Denmark is Aarhus, with a population of around 340,000 people. It is located on the east coast of Jutland, and is known for its vibrant cultural scene, beautiful beaches, and historic landmarks. Aarhus is also home to Aarhus University, one of the largest universities in Scandinavia.'

The endpoint is working correctly! We have successfully set up a custom generating task for a distilabel pipeline.

Creating a RAG pipeline using Haystack for the European AI Act

For this end-to-end example, we would like to create an expert model capable of answering questions and providing information about the new AI Act promoted by the European Union, which is the first regulation on artificial intelligence. As part of its digital strategy, the EU wants to regulate AI to ensure better conditions for the development and use of this innovative technology. The act is a regulatory framework for AI, with different risk levels implying more or less regulation; these are the world's first rules on AI.

The RAG pipeline that we want to create downloads the PDF file, converts it to plain text and preprocesses it, creating batches that we can feed to distilabel to start creating instructions. Let's see this first part of the pipeline and get the input data. Note that this RAG part of the pipeline is not an active retrieval pipeline based on queries or semantic similarity, but a more brute-force approach in which we download the PDF and preprocess its contents.

Downloading the AI Act PDF

Firstly, we need to download the PDF document itself. We'll place it in our working directory, if it's not there already.

%%bash

if [ ! -f "The-AI-Act.pdf" ]; then
    wget -q https://artificialintelligenceact.eu/wp-content/uploads/2021/08/The-AI-Act.pdf
fi

Once we have it in our working directory, we can use Haystack's converter and preprocessing features to extract the textual data, clean it and divide it into batches. Afterwards, these batches will be used to start creating synthetic instructions.

# The converter turns the PDF into text we can process easily
converter = PDFToTextConverter(remove_numeric_tables=True, valid_languages=["en"])

# Preprocessing pipelines can have several steps.
# Ours cleans empty lines, headers, footers and whitespace,
# and splits the text into batches of up to 150 words,
# respecting where sentences naturally end and begin.
preprocessor = PreProcessor(
    clean_empty_lines=True,
    clean_whitespace=True,
    clean_header_footer=True,
    split_by="word",
    split_length=150,
    split_respect_sentence_boundary=True,
)

doc = converter.convert(file_path="The-AI-Act.pdf", meta=None)[0]
docs = preprocessor.process([doc])
print(f"Documents: 1\nBatches: {len(docs)}")
pdftotext version 4.04 [www.xpdfreader.com]
Copyright 1996-2022 Glyph & Cog, LLC
Preprocessing:   0%|          | 0/1 [00:00<?, ?docs/s]
[01/18/24 09:00:15] WARNING  WARNING:haystack.nodes.preprocessor.preprocessor:We found one or   preprocessor.py:516
                             more sentences whose split count is higher than the split length.                     
Preprocessing: 100%|██████████| 1/1 [00:00<00:00,  5.05docs/s]
Documents: 1
Batches: 355



Let's take a quick look at the batches we just generated.

inputs = [doc.content for doc in docs]
inputs[0][0:500]
'EN EN\nEUROPEAN\nCOMMISSION\nProposal for a\nREGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL\nLAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE\n(ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION\nLEGISLATIVE ACTS\x0cEN\nEXPLANATORY MEMORANDUM\n1. CONTEXT OF THE PROPOSAL\n1.1. Reasons for and objectives of the proposal\nThis explanatory memorandum accompanies the proposal for a Regulation laying down\nharmonised rules on artificial intelligence (Artificial Intelligence Act). Artificial Int'

The document has been correctly batched, going from one big document to 355 strings of up to 150 words each. This list of strings can now be used as input to generate an instruction dataset with distilabel.
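
As a quick optional check (not in the original tutorial), we can look at the word counts of the batches; a few may run slightly longer than 150 words, since the preprocessor warned above that some single sentences exceed the split length:

# Inspect the word-count distribution of the batches
word_counts = [len(doc.content.split()) for doc in docs]
print(f"Shortest batch: {min(word_counts)} words, longest batch: {max(word_counts)} words")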

Generating instructions with SelfInstructTask

With our Inference Endpoint up and running, we can generate instructions with distilabel. These instructions, produced by the LLM through our endpoint, will form an instruction dataset, with instructions created from the data we just extracted.

For this example, we are using a subset of 50 batches generated in the section above, to be gentle on performance.

instructions_dataset = Dataset.from_dict({"input": inputs[0:50]})

instructions_dataset
Dataset({
    features: ['input'],
    num_rows: 50
})

With the SelfInstructTask class we can generate a Self-Instruct specification for building the prompts, as done in the Self-Instruct paper. distilabel starts from human-made input, in this case the batches we created from the AI Act PDF, and generates instructions based on it. These instructions can then be reviewed with Argilla to keep the best ones.

An application description can be passed as a parameter to specify the behaviour of the model; we want a model capable of answering our questions about the AI Act.

instructions_task = SelfInstructTask(
    application_description="A assistant that can answer questions about the AI Act made by the European Union."
)

Let's now define a generator, passing the SelfInstructTask object, and create a Pipeline object.

instructions_generator = InferenceEndpointsLLM(
    endpoint_name_or_model_id=os.getenv("HF_INFERENCE_ENDPOINT_NAME"),  # type: ignore
    endpoint_namespace=os.getenv("HF_NAMESPACE"),  # type: ignore
    token=os.getenv("HF_TOKEN") or None,
    task=instructions_task,
)

instructions_pipeline = Pipeline(generator=instructions_generator)

Our pipeline is ready to be used to generate instructions. Let's do it!

generated_instructions = instructions_pipeline.generate(
    dataset=instructions_dataset, num_generations=1, batch_size=8
)

The pipeline has successfully generated instructions from the inputs and the behaviour description we provided. Let's gather all those instructions and see how they look.

instructions = []
for generations in generated_instructions["instructions"]:
    for generation in generations:
        instructions.extend(generation)

print(f"Number of generated instructions: {len(instructions)}")

for instruction in instructions[:5]:
    print(instruction)
Number of generated instructions: 178
What are the reasons for and objectives of the proposal for a Regulation laying down harmonised rules on artificial intelligence?
How can artificial intelligence improve prediction, optimise operations and resource allocation, and personalise service delivery?
What benefits can artificial intelligence bring to the European economy and society as a whole?
How can the use of artificial intelligence support socially and environmentally beneficial outcomes?
What are the high-impact sectors that require AI action according to the AI Act by the European Union?

These initial instructions form our instruction dataset. Following the human-in-the-loop approach, we should push the instructions to Argilla to visualize them and rank them by quality. Those annotations are essential for producing quality data, which ensures a better performance of the final model. Nevertheless, this step is optional.

Pushing the instruction dataset to Argilla to visualize and annotate.

Let's take a quick look at the instructions generated by SelfInstructTask.

generated_instructions[0]
{'input': 'EN EN\nEUROPEAN\nCOMMISSION\nProposal for a\nREGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL\nLAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE\n(ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION\nLEGISLATIVE ACTS\x0cEN\nEXPLANATORY MEMORANDUM\n1. CONTEXT OF THE PROPOSAL\n1.1. Reasons for and objectives of the proposal\nThis explanatory memorandum accompanies the proposal for a Regulation laying down\nharmonised rules on artificial intelligence (Artificial Intelligence Act). Artificial Intelligence\n(AI) is a fast evolving family of technologies that can bring a wide array of economic and\nsocietal benefits across the entire spectrum of industries and social activities. By improving\nprediction, optimising operations and resource allocation, and personalising service delivery,\nthe use of artificial intelligence can support socially and environmentally beneficial outcomes\nand provide key competitive advantages to companies and the European economy. ',
 'generation_model': ['argilla/notus-7b-v1'],
 'generation_prompt': ['You are an expert prompt writer, writing the best and most diverse prompts for a variety of tasks. You are given a task description and a set of instructions for how to write the prompts for an specific AI application.\n# Task Description\nDevelop 5 user queries that can be received by the given AI application and applicable to the provided context. Emphasize diversity in verbs and linguistic structures within the model\'s textual capabilities.\n\n# Criteria for Queries\nIncorporate a diverse range of verbs, avoiding repetition.\nEnsure queries are compatible with AI model\'s text generation functions and are limited to 1-2 sentences.\nDesign queries to be self-contained and standalone.\nBlend interrogative (e.g., "What is the significance of x?") and imperative (e.g., "Detail the process of x.") styles.\nWrite each query on a separate line and avoid using numbered lists or bullet points.\n\n# AI Application\nA assistant that can answer questions about the AI Act made by the European Union.\n\n# Context\nEN EN\nEUROPEAN\nCOMMISSION\nProposal for a\nREGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL\nLAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE\n(ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION\nLEGISLATIVE ACTS\x0cEN\nEXPLANATORY MEMORANDUM\n1. CONTEXT OF THE PROPOSAL\n1.1. Reasons for and objectives of the proposal\nThis explanatory memorandum accompanies the proposal for a Regulation laying down\nharmonised rules on artificial intelligence (Artificial Intelligence Act). Artificial Intelligence\n(AI) is a fast evolving family of technologies that can bring a wide array of economic and\nsocietal benefits across the entire spectrum of industries and social activities. By improving\nprediction, optimising operations and resource allocation, and personalising service delivery,\nthe use of artificial intelligence can support socially and environmentally beneficial outcomes\nand provide key competitive advantages to companies and the European economy. \n\n# Output\n'],
 'raw_generation_responses': ['1. What are the reasons for and objectives of the proposal for a Regulation laying down harmonised rules on artificial intelligence?\n2. How can artificial intelligence improve prediction, optimise operations and resource allocation, and personalise service delivery?\n3. What benefits can artificial intelligence bring to the European economy and society as a whole?\n4. How can the use of artificial intelligence support socially and environmentally beneficial outcomes?\n5. What competitive advantages can companies gain from using artificial intelligence?'],
 'instructions': [['What are the reasons for and objectives of the proposal for a Regulation laying down harmonised rules on artificial intelligence?',
   'How can artificial intelligence improve prediction, optimise operations and resource allocation, and personalise service delivery?',
   'What benefits can artificial intelligence bring to the European economy and society as a whole?',
   'How can the use of artificial intelligence support socially and environmentally beneficial outcomes?']]}

For each input, i.e., each batch of the AI Act PDF file, we have a generation prompt with general guidelines on how to behave, plus the application description parameter. For the record shown above, four instructions were generated.
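
As a quick optional check (not part of the original tutorial), we can count how many instructions each input produced; this assumes num_generations=1, so each row holds a single list of instructions, as in the record above:

# Count the instructions generated for each input batch
counts = [len(gens[0]) if gens else 0 for gens in generated_instructions["instructions"]]
print(
    f"Instructions per input: min {min(counts)}, max {max(counts)}, "
    f"avg {sum(counts) / len(counts):.1f}"
)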

Now it's the perfect time to upload the instruction dataset to Argilla, review it and manually annotate it.

instructions_rg_dataset = generated_instructions.to_argilla()
instructions_rg_dataset[0]
FeedbackRecord(fields={'input': 'EN EN\nEUROPEAN\nCOMMISSION\nProposal for a\nREGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL\nLAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE\n(ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION\nLEGISLATIVE ACTS\x0cEN\nEXPLANATORY MEMORANDUM\n1. CONTEXT OF THE PROPOSAL\n1.1. Reasons for and objectives of the proposal\nThis explanatory memorandum accompanies the proposal for a Regulation laying down\nharmonised rules on artificial intelligence (Artificial Intelligence Act). Artificial Intelligence\n(AI) is a fast evolving family of technologies that can bring a wide array of economic and\nsocietal benefits across the entire spectrum of industries and social activities. By improving\nprediction, optimising operations and resource allocation, and personalising service delivery,\nthe use of artificial intelligence can support socially and environmentally beneficial outcomes\nand provide key competitive advantages to companies and the European economy.', 'instruction': 'What are the reasons for and objectives of the proposal for a Regulation laying down harmonised rules on artificial intelligence?'}, metadata={'length-input': 964, 'length-instruction': 129, 'generation-model': 'argilla/notus-7b-v1'}, vectors={}, responses=[], suggestions=(), external_id=None)
instructions_rg_dataset.push_to_argilla(name=f"notus_AI_instructions")

In the Argilla UI, each tuple input-instruction is visualized individually, and can be individually annotated.

Generating a preference dataset using an UltraFeedback text quality task

Once we have our instruction dataset, we are going to create a preference dataset using the UltraFeedback text quality task. This is a type of task used in NLP to evaluate the quality of generated text; our goal is to provide detailed feedback on the quality of the generated text, beyond a simple binary label.

The pipeline() function allows us to create a Pipeline instance with the provided LLMs for a given task, which is useful whenever you want to use a pre-defined or custom Pipeline. We specify our task and subtask, the generator we want to use (in this case, one based on a TextGenerationTask) and our OpenAI API key.

preference_pipeline = pipeline(
    "preference",  # task: build a preference dataset
    "instruction-following",  # UltraFeedback subtask used by the labeller
    generator=InferenceEndpointsLLM(
        endpoint_name_or_model_id=os.getenv("HF_INFERENCE_ENDPOINT_NAME"),  # type: ignore
        endpoint_namespace=os.getenv("HF_NAMESPACE", None),
        task=TextGenerationTask(),
        max_new_tokens=256,
        num_threads=2,
        temperature=0.3,
    ),
    max_new_tokens=256,
    num_threads=2,
    api_key=os.getenv("OPENAI_API_KEY", None),
    temperature=0.0,
)

We also need to retrieve our instruction dataset from Argilla, as it will be the input of this pipeline (see the Human Feedback with Argilla section below for how to connect to your Argilla instance).

remote_dataset = rg.FeedbackDataset.from_argilla(
    "notus_AI_instructions", workspace="admin"
)
instructions_dataset = remote_dataset.pull(max_records=100)  # get first 100 records

instructions_dataset = instructions_dataset.format_as("datasets")
instructions_dataset
Dataset({
    features: ['input', 'instruction', 'instruction-rating', 'instruction-rating-suggestion', 'instruction-rating-suggestion-metadata', 'external_id', 'metadata'],
    num_rows: 100
})
instructions_dataset[0]
{'input': 'EN EN\nEUROPEAN\nCOMMISSION\nProposal for a\nREGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL\nLAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE\n(ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION\nLEGISLATIVE ACTS\x0cEN\nEXPLANATORY MEMORANDUM\n1. CONTEXT OF THE PROPOSAL\n1.1. Reasons for and objectives of the proposal\nThis explanatory memorandum accompanies the proposal for a Regulation laying down\nharmonised rules on artificial intelligence (Artificial Intelligence Act). Artificial Intelligence\n(AI) is a fast evolving family of technologies that can bring a wide array of economic and\nsocietal benefits across the entire spectrum of industries and social activities. By improving\nprediction, optimising operations and resource allocation, and personalising service delivery,\nthe use of artificial intelligence can support socially and environmentally beneficial outcomes\nand provide key competitive advantages to companies and the European economy.',
 'instruction': 'What are the reasons for and objectives of the proposal for a Regulation laying down harmonised rules on artificial intelligence?',
 'instruction-rating': [],
 'instruction-rating-suggestion': None,
 'instruction-rating-suggestion-metadata': {'type': None,
  'score': None,
  'agent': None},
 'external_id': None,
 'metadata': '{"length-input": 964, "length-instruction": 129, "generation-model": "argilla/notus-7b-v1"}'}

Before generating text based on our instructions, we need to tweak the dataset a little. From the previous section, we still have the old input: the batches from the PDF. We have to replace it with the instructions that we generated.

instructions_dataset = instructions_dataset.rename_columns({"input": "context", "instruction": "input"})

Now, let's build a preference dataset using the pipeline we just created and the instruction dataset we prepared.

preference_dataset = preference_pipeline.generate(
    instructions_dataset,  # type: ignore
    num_generations=2,
    batch_size=8,
    display_progress_bar=True,
)

Let's take a look at an instance of the preference dataset.

preference_dataset[0]
{'context': 'EN EN\nEUROPEAN\nCOMMISSION\nProposal for a\nREGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL\nLAYING DOWN HARMONISED RULES ON ARTIFICIAL INTELLIGENCE\n(ARTIFICIAL INTELLIGENCE ACT) AND AMENDING CERTAIN UNION\nLEGISLATIVE ACTS\x0cEN\nEXPLANATORY MEMORANDUM\n1. CONTEXT OF THE PROPOSAL\n1.1. Reasons for and objectives of the proposal\nThis explanatory memorandum accompanies the proposal for a Regulation laying down\nharmonised rules on artificial intelligence (Artificial Intelligence Act). Artificial Intelligence\n(AI) is a fast evolving family of technologies that can bring a wide array of economic and\nsocietal benefits across the entire spectrum of industries and social activities. By improving\nprediction, optimising operations and resource allocation, and personalising service delivery,\nthe use of artificial intelligence can support socially and environmentally beneficial outcomes\nand provide key competitive advantages to companies and the European economy.',
 'input': 'What are the reasons for and objectives of the proposal for a Regulation laying down harmonised rules on artificial intelligence?',
 'instruction-rating': [],
 'instruction-rating-suggestion': None,
 'instruction-rating-suggestion-metadata': {'agent': None,
  'score': None,
  'type': None},
 'external_id': None,
 'metadata': '{"length-input": 964, "length-instruction": 129, "generation-model": "argilla/notus-7b-v1"}',
 'generation_model': ['argilla/notus-7b-v1', 'argilla/notus-7b-v1'],
 'generation_prompt': ["You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\nWhat are the reasons for and objectives of the proposal for a Regulation laying down harmonised rules on artificial intelligence?",
  "You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.\nIf a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.\nWhat are the reasons for and objectives of the proposal for a Regulation laying down harmonised rules on artificial intelligence?"],
 'raw_generation_responses': ["\nThe proposal for a Regulation laying down harmonised rules on artificial intelligence (AI) aims to ensure the trustworthy use of AI in the EU. It seeks to create a single market for AI applications and services, while ensuring that they are safe and respect fundamental rights. The proposal is part of the EU's broader strategy on AI, which aims to put the EU at the forefront of global AI development and deployment.\nThe objectives of the proposal are to:\n\n1. Ensure that AI systems are designed, developed, and deployed in a way that respects fundamental rights and values, including human dignity, freedom, and privacy.\n2. Ensure that AI systems are safe and secure, and do not pose unacceptable risks to people, property, or the environment.\n3. Ensure that AI systems are robust, reliable, and accurate, and can be trusted to deliver the intended functionality.\n4. Ensure that AI systems are traceable, meaning that it is possible to track how they work and how they make decisions.\n5. Ensure that AI systems are transparent, meaning that it is possible to understand how they work and how they make decisions.\n6. Ensure that AI systems are fair, meaning that they do not discriminate against individuals",
  '\nThe proposal for a Regulation laying down harmonised rules on artificial intelligence (AI) aims to ensure a high level of safety and security of AI systems and to establish a horizontal and technology-neutral framework for AI applications. This will help to create a single market for AI and to ensure that AI systems are developed and deployed in a responsible manner. The proposal will also help to strengthen the competitiveness of the EU industry in the global AI market.\nThe objectives of the proposal are:\n1. To ensure that AI systems are safe and secure by establishing a risk-based framework for the development, placement on the market and use of AI systems.\n2. To establish a horizontal and technology-neutral framework for AI applications that is applicable to all sectors and types of AI systems.\n3. To ensure that AI systems are developed and deployed in a responsible manner by establishing requirements for transparency, robustness, security, accuracy, controllability and privacy protection.\n4. To create a single market for AI by ensuring that AI systems are developed and deployed in a harmonised manner across the EU.\n5. To strengthen the competitiveness of the EU industry in the global AI market by creating a level playing field for businesses and by promoting the'],
 'generations': ["\nThe proposal for a Regulation laying down harmonised rules on artificial intelligence (AI) aims to ensure the trustworthy use of AI in the EU. It seeks to create a single market for AI applications and services, while ensuring that they are safe and respect fundamental rights. The proposal is part of the EU's broader strategy on AI, which aims to put the EU at the forefront of global AI development and deployment.\nThe objectives of the proposal are to:\n\n1. Ensure that AI systems are designed, developed, and deployed in a way that respects fundamental rights and values, including human dignity, freedom, and privacy.\n2. Ensure that AI systems are safe and secure, and do not pose unacceptable risks to people, property, or the environment.\n3. Ensure that AI systems are robust, reliable, and accurate, and can be trusted to deliver the intended functionality.\n4. Ensure that AI systems are traceable, meaning that it is possible to track how they work and how they make decisions.\n5. Ensure that AI systems are transparent, meaning that it is possible to understand how they work and how they make decisions.\n6. Ensure that AI systems are fair, meaning that they do not discriminate against individuals",
  '\nThe proposal for a Regulation laying down harmonised rules on artificial intelligence (AI) aims to ensure a high level of safety and security of AI systems and to establish a horizontal and technology-neutral framework for AI applications. This will help to create a single market for AI and to ensure that AI systems are developed and deployed in a responsible manner. The proposal will also help to strengthen the competitiveness of the EU industry in the global AI market.\nThe objectives of the proposal are:\n1. To ensure that AI systems are safe and secure by establishing a risk-based framework for the development, placement on the market and use of AI systems.\n2. To establish a horizontal and technology-neutral framework for AI applications that is applicable to all sectors and types of AI systems.\n3. To ensure that AI systems are developed and deployed in a responsible manner by establishing requirements for transparency, robustness, security, accuracy, controllability and privacy protection.\n4. To create a single market for AI by ensuring that AI systems are developed and deployed in a harmonised manner across the EU.\n5. To strengthen the competitiveness of the EU industry in the global AI market by creating a level playing field for businesses and by promoting the'],
 'labelling_model': 'gpt-3.5-turbo',
 'labelling_prompt': [{'content': 'Your role is to evaluate text quality based on given criteria.',
   'role': 'system'},
  {'content': "\n# Instruction Following Assessment\nEvaluate alignment between output and intent. Assess understanding of task goal and restrictions.\n**Instruction Components**: Task Goal (intended outcome), Restrictions (text styles, formats, or designated methods, etc).\n\n**Scoring**: Rate outputs 1 to 5:\n\n1. **Irrelevant**: No alignment.\n2. **Partial Focus**: Addresses one aspect poorly.\n3. **Partial Compliance**:\n\t- (1) Meets goal or restrictions, neglecting other.\n\t- (2) Acknowledges both but slight deviations.\n4. **Almost There**: Near alignment, minor deviations.\n5. **Comprehensive Compliance**: Fully aligns, meets all requirements.\n\n---\n\n## Format\n\n### Input\nInstruction: [Specify task goal and restrictions]\n\nTexts:\n\n<text 1> [Text 1]\n<text 2> [Text 2]\n\n### Output\n\n#### Output for Text 1\nRating: [Rating for text 1]\nRationale: [Rationale for the rating in short sentences]\n\n#### Output for Text 2\nRating: [Rating for text 2]\nRationale: [Rationale for the rating in short sentences]\n\n---\n\n## Annotation\n\n### Input\nInstruction: What are the reasons for and objectives of the proposal for a Regulation laying down harmonised rules on artificial intelligence?\n\nTexts:\n\n<text 1> \nThe proposal for a Regulation laying down harmonised rules on artificial intelligence (AI) aims to ensure the trustworthy use of AI in the EU. It seeks to create a single market for AI applications and services, while ensuring that they are safe and respect fundamental rights. The proposal is part of the EU's broader strategy on AI, which aims to put the EU at the forefront of global AI development and deployment.\nThe objectives of the proposal are to:\n\n1. Ensure that AI systems are designed, developed, and deployed in a way that respects fundamental rights and values, including human dignity, freedom, and privacy.\n2. Ensure that AI systems are safe and secure, and do not pose unacceptable risks to people, property, or the environment.\n3. Ensure that AI systems are robust, reliable, and accurate, and can be trusted to deliver the intended functionality.\n4. Ensure that AI systems are traceable, meaning that it is possible to track how they work and how they make decisions.\n5. Ensure that AI systems are transparent, meaning that it is possible to understand how they work and how they make decisions.\n6. Ensure that AI systems are fair, meaning that they do not discriminate against individuals\n<text 2> \nThe proposal for a Regulation laying down harmonised rules on artificial intelligence (AI) aims to ensure a high level of safety and security of AI systems and to establish a horizontal and technology-neutral framework for AI applications. This will help to create a single market for AI and to ensure that AI systems are developed and deployed in a responsible manner. The proposal will also help to strengthen the competitiveness of the EU industry in the global AI market.\nThe objectives of the proposal are:\n1. To ensure that AI systems are safe and secure by establishing a risk-based framework for the development, placement on the market and use of AI systems.\n2. To establish a horizontal and technology-neutral framework for AI applications that is applicable to all sectors and types of AI systems.\n3. To ensure that AI systems are developed and deployed in a responsible manner by establishing requirements for transparency, robustness, security, accuracy, controllability and privacy protection.\n4. 
To create a single market for AI by ensuring that AI systems are developed and deployed in a harmonised manner across the EU.\n5. To strengthen the competitiveness of the EU industry in the global AI market by creating a level playing field for businesses and by promoting the\n\n### Output ",
   'role': 'user'}],
 'raw_labelling_response': '#### Output for Text 1\nRating: 5\nRationale: The text fully aligns with the task goal and restrictions. It clearly states the reasons for and objectives of the proposal for a Regulation laying down harmonised rules on artificial intelligence, including ensuring the trustworthy use of AI, creating a single market for AI applications and services, and ensuring safety, respect for fundamental rights, robustness, transparency, and fairness of AI systems.\n\n#### Output for Text 2\nRating: 4\nRationale: The text mostly aligns with the task goal and restrictions. It addresses the reasons for and objectives of the proposal for a Regulation laying down harmonised rules on artificial intelligence, including ensuring safety and security of AI systems, establishing a horizontal and technology-neutral framework, promoting responsible development and deployment of AI systems, creating a single market for AI, and strengthening the competitiveness of the EU industry in the global AI market. However, it does not explicitly mention the need to respect fundamental rights, accuracy of AI systems, and traceability of AI systems, which are mentioned in the task goal and restrictions.',
 'rating': [5.0, 4.0],
 'rationale': ['The text fully aligns with the task goal and restrictions. It clearly states the reasons for and objectives of the proposal for a Regulation laying down harmonised rules on artificial intelligence, including ensuring the trustworthy use of AI, creating a single market for AI applications and services, and ensuring safety, respect for fundamental rights, robustness, transparency, and fairness of AI systems.',
  'The text mostly aligns with the task goal and restrictions. It addresses the reasons for and objectives of the proposal for a Regulation laying down harmonised rules on artificial intelligence, including ensuring safety and security of AI systems, establishing a horizontal and technology-neutral framework, promoting responsible development and deployment of AI systems, creating a single market for AI, and strengthening the competitiveness of the EU industry in the global AI market. However, it does not explicitly mention the need to respect fundamental rights, accuracy of AI systems, and traceability of AI systems, which are mentioned in the task goal and restrictions.']}
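
Each record now carries two generations and their UltraFeedback ratings, so we can already peek at which generation the labeller preferred. A minimal, optional sketch (not part of the original tutorial), assuming the record layout shown above:

# Show the preferred generation for the first few records
for record in preference_dataset.select(range(3)):
    ratings = record["rating"]
    if not ratings or any(r is None for r in ratings):
        continue  # labelling can occasionally fail for a record
    best = max(range(len(ratings)), key=lambda i: ratings[i])
    print(f"Preferred generation: #{best + 1} (ratings: {ratings})")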

Human Feedback with Argilla

You can use the AI Feedback created by distilabel directly, but we have seen that enhancing it with human feedback improves the quality of your LLM. We provide a to_argilla method which creates a dataset for Argilla, along with out-of-the-box tailored metadata filters and semantic search, to allow you to provide human feedback as quickly and engagingly as possible. You can check the Argilla docs to get it up and running.

First, install it.

!pip install "distilabel[argilla]"

If you are running Argilla using the Docker quickstart image or Hugging Face Spaces, you need to init the Argilla client with the URL and API_KEY:

import argilla as rg

# Replace api_url with the url to your HF Spaces URL if using Spaces
# Replace api_key if you configured a custom API key
rg.init(
    api_url="http://localhost:6900",
    api_key="owner.apikey",
    workspace="admin"
)

Once our preference dataset has been correctly generated, the Argilla UI is the best tool at our disposal to visualize and annotate it. As with the instruction dataset, we just have to convert it to an Argilla FeedbackDataset and push it to Argilla.

# Uploading the Preference Dataset
preference_rg_dataset = preference_dataset.to_argilla()

# Adding the context as a metadata property in the new Feedback dataset, as this
# information will be useful later.
for record_feedback, record_huggingface in zip(
    preference_rg_dataset, preference_dataset
):
    record_feedback.metadata["context"] = record_huggingface["context"]

preference_rg_dataset.push_to_argilla(name=f"notus_AI_preference")

In the Argilla UI, we can see the input (an instruction), and the two generations that the LLM created out of it.

Conclusions

To conclude, we have gone through an end-to-end example of distilabel. We set up an Inference Endpoint, defined a distilabel pipeline that extracts information from a PDF, and created and manually reviewed the instruction and preference datasets derived from that input. The final preference dataset is perfect for fine-tuning, and you can easily do this using the ArgillaTrainer from Argilla. Have a look at these resources if you want to go further: