

πŸ“’ 2026 Survey: Use of MediaTech's public datasets

Do you use this dataset or other datasets from our MediaTech collection? Your feedback matters! Help us improve our public datasets by answering this quick survey (5 min): πŸ‘‰ https://grist.numerique.gouv.fr/o/albert/forms/gF4hLaq9VvUog6c5aVDuMw/11 Thank you for your contribution! πŸ™Œ


πŸ‡«πŸ‡· CNIL Deliberations Dataset

This dataset is a processed and embedded version of the official deliberations and decisions published by the CNIL (Commission Nationale de l’Informatique et des LibertΓ©s), the French data protection authority.
It includes a variety of legal documents such as opinions, recommendations, simplified norms, general authorizations, and formal decisions.

The original data is downloaded from the dedicated DILA open data repository, and the dataset is also available on data.gouv.fr (Les dΓ©libΓ©rations de la CNIL). It provides semantic-ready, structured, and chunked data, making it suitable for semantic search, AI legal assistants, or RAG pipelines, for example. These chunks have then been embedded using the BAAI/bge-m3 model.


πŸ—‚οΈ Dataset Contents

The dataset is provided in Parquet format and includes the following columns:

| Column Name | Type | Description |
|---|---|---|
| chunk_id | str | Unique identifier for each chunk. |
| doc_id | str | Document identifier of the deliberation. |
| chunk_index | int | Index of the chunk within the same deliberation document, starting from 1. |
| chunk_xxh64 | str | XXH64 hash of the chunk_text value. |
| nature | str | Type of act (e.g., deliberation, decision...). |
| status | str | Status of the document (e.g., vigueur, vigueur_diff). |
| nature_delib | str | Specific nature of the deliberation. |
| title | str | Title of the deliberation or decision. |
| full_title | str | Full title of the deliberation or decision. |
| number | str | Official reference number. |
| date | str | Date of publication (format: YYYY-MM-DD). |
| text | str | Raw text content of the chunk extracted from the deliberation or decision. |
| chunk_text | str | Formatted text chunk used for embedding (includes title + content). |
| embeddings_bge-m3 | str | Embedding vector of chunk_text using BAAI/bge-m3, stored as a JSON string. |
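
For a quick look at these columns before downloading the full dataset, you can stream a single record. This is a minimal sketch using the datasets library and the train split referenced in the loading example further below:

from datasets import load_dataset

# Stream the dataset so only the first record is fetched
ds = load_dataset("AgentPublic/cnil", split="train", streaming=True)
row = next(iter(ds))
print(row["doc_id"], row["chunk_index"], row["title"])
print(row["chunk_text"][:200])  # beginning of the text that was embedded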

πŸ› οΈ Data Processing Methodology

1. πŸ“₯ Field Extraction

Data was extracted from the dedicated DILA open data repository.
The following transformations were applied:

  • Basic fields: doc_id (cid), title, full_title, number, date, nature, status, nature_delib, were taken directly from the source XML file.
  • Generated fields:
    • chunk_id: a generated unique identifier combining the doc_id and chunk_index.
    • chunk_index: is the index of the chunk of a same deliberation document. Each document has an unique doc_id.
    • chunk_xxh64: is the xxh64 hash of the chunk_text value. It is useful to determine if the chunk_text value has changed from a version to another.
  • Textual fields:
    • text: Chunk of the main text content.
    • chunk_text: Combines title and the main text body to maximize embedding relevance.

2. βœ‚οΈ Text Chunking

The chunk_text value includes the title and the textual content of the chunk. This strategy is designed to improve semantic search for document search use cases on administrative procedures.

LangChain's RecursiveCharacterTextSplitter was used to produce these chunks (the text value), with the following parameters (a chunking sketch follows the list):

  • chunk_size = 1500
  • chunk_overlap = 200
  • length_function = len

3. 🧠 Embeddings Generation

Each chunk_text was embedded using the BAAI/bge-m3 model. The resulting embedding vector is stored in the embeddings_bge-m3 column as a string, but can easily be parsed back into a list[float] or NumPy array.
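
The card does not state which library was used to run the model; as one possible sketch, the chunks could be embedded with the sentence-transformers package (the example texts, batch size, and normalization are illustrative assumptions):

import json
from sentence_transformers import SentenceTransformer

# BAAI/bge-m3 is the model named in this card; sentence-transformers is one way to run it.
model = SentenceTransformer("BAAI/bge-m3")

chunk_texts = ["Titre de la dΓ©libΓ©ration\nContenu du chunk..."]  # placeholder chunk_text values
embeddings = model.encode(chunk_texts, batch_size=32, normalize_embeddings=True)  # normalization is an assumption

# Store each vector as a JSON string, matching the embeddings_bge-m3 column format.
serialized = [json.dumps(vec.tolist()) for vec in embeddings]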

πŸŽ“ Tutorials

πŸ”„ 1. The chunking doesn't fit your use case?

If you need to reconstitute the original, un-chunked dataset, you can follow this tutorial notebook available on our GitHub repository.

⚠️ The tutorial is only relevant for datasets that were chunked without overlap.

πŸ€– 2. How to load MediaTech's datasets from Hugging Face and use them in a RAG pipeline?

To learn how to load MediaTech's datasets from Hugging Face and integrate them into a Retrieval-Augmented Generation (RAG) pipeline, check out our step-by-step RAG tutorial available on our GitHub repository!

πŸ“Œ 3. Embedding Use Notice

⚠️ The embeddings_bge-m3 column is stored as a stringified list of floats (e.g., "[-0.03062629,-0.017049594,...]"). To use it as a vector, you need to parse it into a list of floats or a NumPy array.

Using the datasets library:

import pandas as pd
import json
from datasets import load_dataset
# The PyArrow library must be installed in your Python environment for this example: pip install pyarrow

dataset = load_dataset("AgentPublic/cnil")
df = pd.DataFrame(dataset['train'])
# Parse the stringified embeddings back into lists of floats
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)

Using downloaded local Parquet files:

import pandas as pd
import json
# The PyArrow library must be installed in your Python environment for this example: pip install pyarrow

df = pd.read_parquet(path="cnil-latest/")  # Assuming that all Parquet files are located in this folder
# Parse the stringified embeddings back into lists of floats
df["embeddings_bge-m3"] = df["embeddings_bge-m3"].apply(json.loads)

You can then use the dataframe as you wish, such as by inserting the data from the dataframe into the vector database of your choice.
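
Before moving to a full vector database, a quick in-memory sanity check is possible; the sketch below embeds a query with BAAI/bge-m3 and ranks chunks by cosine similarity using NumPy (the query text and top-k value are illustrative, and df is the dataframe built above):

import numpy as np
from sentence_transformers import SentenceTransformer

# Stack the parsed embeddings into a (num_chunks, dim) matrix
matrix = np.array(df["embeddings_bge-m3"].tolist(), dtype=np.float32)

# Embed an example query with the same model used for the chunks
model = SentenceTransformer("BAAI/bge-m3")
query_vec = model.encode(["sanctions prononcΓ©es par la CNIL"])[0]  # illustrative query

# Cosine similarity between the query and every chunk, then keep the 5 best matches
scores = matrix @ query_vec / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(query_vec))
top_k = np.argsort(-scores)[:5]
print(df.iloc[top_k][["doc_id", "title"]])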

🐱 GitHub repository :

The MediaTech project is open source! You are free to contribute or to explore the complete code used to build the dataset in the GitHub repository.

πŸ“š Source & License

πŸ”— Source: The dedicated DILA open data repository (also referenced on data.gouv.fr as "Les dΓ©libΓ©rations de la CNIL").

πŸ“„ Licence:

Open License (Etalab) β€” This dataset is publicly available and can be reused under the conditions of the Etalab open license.
