url stringlengths 58 61 | repository_url stringclasses 1 value | labels_url stringlengths 72 75 | comments_url stringlengths 67 70 | events_url stringlengths 65 68 | html_url stringlengths 48 51 | id int64 600M 3.67B | node_id stringlengths 18 24 | number int64 2 7.88k | title stringlengths 1 290 | user dict | labels listlengths 0 4 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0 4 | comments listlengths 0 30 | created_at timestamp[s]date 2020-04-14 18:18:51 2025-11-26 16:16:56 | updated_at timestamp[s]date 2020-04-29 09:23:05 2025-11-30 03:52:07 | closed_at timestamp[s]date 2020-04-29 09:23:05 2025-11-21 12:31:19 ⌀ | author_association stringclasses 4 values | type null | active_lock_reason null | draft null | pull_request null | body stringlengths 0 228k ⌀ | closed_by dict | reactions dict | timeline_url stringlengths 67 70 | performed_via_github_app null | state_reason stringclasses 4 values | sub_issues_summary dict | issue_dependencies_summary dict | is_pull_request bool 1 class | closed_at_time_taken duration[s] |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7883/comments | https://api.github.com/repos/huggingface/datasets/issues/7883/events | https://github.com/huggingface/datasets/issues/7883 | 3,668,182,561 | I_kwDODunzps7apAYh | 7,883 | Data.to_csv() cannot be recognized by pylance | {
"avatar_url": "https://avatars.githubusercontent.com/u/154290630?v=4",
"events_url": "https://api.github.com/users/xi4ngxin/events{/privacy}",
"followers_url": "https://api.github.com/users/xi4ngxin/followers",
"following_url": "https://api.github.com/users/xi4ngxin/following{/other_user}",
"gists_url": "ht... | [] | open | false | null | [] | [] | 2025-11-26T16:16:56 | 2025-11-26T16:16:56 | null | NONE | null | null | null | null | ### Describe the bug
Hi everyone! I am a beginner with `datasets`.
I am testing reading multiple CSV files from a zip archive. The dataset loads successfully, and it can ultimately be saved to CSV correctly.
Intermediate results:
```
Generating train split: 62973 examples [00:00, 175939.01 examples/... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7883/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7883/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7882/comments | https://api.github.com/repos/huggingface/datasets/issues/7882/events | https://github.com/huggingface/datasets/issues/7882 | 3,667,664,527 | I_kwDODunzps7anB6P | 7,882 | Inconsistent loading of LFS-hosted files in epfml/FineWeb-HQ dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/6270922?v=4",
"events_url": "https://api.github.com/users/Oligou/events{/privacy}",
"followers_url": "https://api.github.com/users/Oligou/followers",
"following_url": "https://api.github.com/users/Oligou/following{/other_user}",
"gists_url": "https://ap... | [] | open | false | null | [] | [] | 2025-11-26T14:06:02 | 2025-11-26T14:06:02 | null | NONE | null | null | null | null | ### Describe the bug
Some files in the `epfml/FineWeb-HQ` dataset fail to load via the Hugging Face `datasets` library.
- xet-hosted files load fine
- LFS-hosted files sometimes fail
Example:
- Fails: https://huggingface.co/datasets/epfml/FineWeb-HQ/blob/main/data/CC-MAIN-2024-26/000_00003.parquet
- Works: ht... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7882/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7882/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7880/comments | https://api.github.com/repos/huggingface/datasets/issues/7880/events | https://github.com/huggingface/datasets/issues/7880 | 3,667,561,864 | I_kwDODunzps7amo2I | 7,880 | Spurious label column created when audiofolder/imagefolder directories match split names | {
"avatar_url": "https://avatars.githubusercontent.com/u/132138786?v=4",
"events_url": "https://api.github.com/users/neha222222/events{/privacy}",
"followers_url": "https://api.github.com/users/neha222222/followers",
"following_url": "https://api.github.com/users/neha222222/following{/other_user}",
"gists_url... | [] | open | false | null | [] | [] | 2025-11-26T13:36:24 | 2025-11-26T13:36:24 | null | NONE | null | null | null | null | ## Describe the bug
When using `audiofolder` or `imagefolder` with directories for **splits** (train/test) rather than class labels, a spurious `label` column is incorrectly created.
**Example:** https://huggingface.co/datasets/datasets-examples/doc-audio-4
```
from datasets import load_dataset
ds = load_dataset("dat... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7880/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7880/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7879 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7879/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7879/comments | https://api.github.com/repos/huggingface/datasets/issues/7879/events | https://github.com/huggingface/datasets/issues/7879 | 3,657,249,446 | I_kwDODunzps7Z_TKm | 7,879 | python core dump when downloading dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/5960219?v=4",
"events_url": "https://api.github.com/users/hansewetz/events{/privacy}",
"followers_url": "https://api.github.com/users/hansewetz/followers",
"following_url": "https://api.github.com/users/hansewetz/following{/other_user}",
"gists_url": "h... | [] | open | false | null | [] | [
"Hi @hansewetz I'm curious, for me it works just fine. Are you still observing the issue?",
"Yup ... still the same issue.\nHowever, after adding a ```sleep(1)``` call after the ``` for``` loop by accident during debugging, the program terminates properly (not a good solution though ... :-) ).\nAre there some thr... | 2025-11-24T06:22:53 | 2025-11-25T20:45:55 | null | NONE | null | null | null | null | ### Describe the bug
When downloading a dataset in streaming mode and exiting the program before the download completes, the Python program core dumps on exit:
```
terminate called without an active exception
Aborted (core dumped)
```
Tested with Python 3.12.3 and Python 3.9.21
### Steps to reproduce the bug
Cr... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7879/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7879/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7877 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7877/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7877/comments | https://api.github.com/repos/huggingface/datasets/issues/7877/events | https://github.com/huggingface/datasets/issues/7877 | 3,652,906,788 | I_kwDODunzps7Zuu8k | 7,877 | work around `tempfile` silently ignoring `TMPDIR` if the dir doesn't exist | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://a... | [] | open | false | null | [] | [
"Hi! Just created a Pull Request (#7890) to try to fix this using your suggestions. I hope it helps!"
] | 2025-11-21T19:51:48 | 2025-11-29T20:37:42 | null | CONTRIBUTOR | null | null | null | null | This should help a lot of users running into `No space left on device` while using `datasets`. Normally the issue is that `/tmp` is too small and the user needs to use another path, which they would normally set as `export TMPDIR=/some/big/storage`
However, the `tempfile` facility that `datasets` and `pyarrow` use ... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7877/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7877/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7872 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7872/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7872/comments | https://api.github.com/repos/huggingface/datasets/issues/7872/events | https://github.com/huggingface/datasets/issues/7872 | 3,643,681,893 | I_kwDODunzps7ZLixl | 7,872 | IterableDataset does not use features information in to_pandas | {
"avatar_url": "https://avatars.githubusercontent.com/u/790640?v=4",
"events_url": "https://api.github.com/users/bonext/events{/privacy}",
"followers_url": "https://api.github.com/users/bonext/followers",
"following_url": "https://api.github.com/users/bonext/following{/other_user}",
"gists_url": "https://api... | [] | open | false | null | [] | [
"Created A PR!",
"Another test script that can be used to test the behavior - \n\n```\nimport datasets\nfrom datasets import features\n\ndef test_crash():\n common_features = features.Features({\n \"a\": features.Value(\"int64\"),\n \"b\": features.List({\"c\": features.Value(\"int64\")}),\n }... | 2025-11-19T17:12:59 | 2025-11-19T18:52:14 | null | NONE | null | null | null | null | ### Describe the bug
An `IterableDataset` created from a generator with an explicit `features=` parameter seems to ignore the provided features description for certain operations, e.g. `.to_pandas(...)`, when data coming from the generator has missing values.
### Steps to reproduce the bug
```python
import datasets
from datasets... | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7872/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7872/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7871/comments | https://api.github.com/repos/huggingface/datasets/issues/7871/events | https://github.com/huggingface/datasets/issues/7871 | 3,643,607,371 | I_kwDODunzps7ZLQlL | 7,871 | Reqwest Error: HTTP status client error (429 Too Many Requests) | {
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"events_url": "https://api.github.com/users/yanan1116/events{/privacy}",
"followers_url": "https://api.github.com/users/yanan1116/followers",
"following_url": "https://api.github.com/users/yanan1116/following{/other_user}",
"gists_url": "... | [] | open | false | null | [] | [
"the dataset repo: `https://huggingface.co/datasets/nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim`",
"Hi @yanan1116,\n\nThanks for the detailed report! However, this issue was filed in the wrong repository. This is a `huggingface_hub` issue, not a `datasets` issue.\n\nLooking at your traceback, you're using t... | 2025-11-19T16:52:24 | 2025-11-30T03:32:00 | null | NONE | null | null | null | null | ### Describe the bug
full error message:
```
Traceback (most recent call last):
File "/home/yanan/miniconda3/bin/hf", line 7, in <module>
sys.exit(main())
~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/hf.py", line 56, in main
app()
~~~^^
File "/ho... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7871/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7871/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7870 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7870/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7870/comments | https://api.github.com/repos/huggingface/datasets/issues/7870/events | https://github.com/huggingface/datasets/issues/7870 | 3,642,209,953 | I_kwDODunzps7ZF7ah | 7,870 | Visualization for Medical Imaging Datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | [
"It would be amazing to be able to show the Papaya UI in google colab / jupyter notebook. IIRC both allow serving javascript via nbextensions that we can surely use in HTML() objects.\n\nAlternatively we could also start with a simple approach and dump the medical image data as a video file that goes through the sl... | 2025-11-19T11:05:39 | 2025-11-21T12:31:19 | 2025-11-21T12:31:19 | CONTRIBUTOR | null | null | null | null | This is a followup to: https://github.com/huggingface/datasets/pull/7815.
I checked the options for visualizing NIfTI (and potentially DICOM), and here's what I found:
- https://github.com/aces/brainbrowser, AGPL3 license, last commit 3 months ago, latest (github) release from 2017. It's available on jsdelivr... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7870/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7870/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 2 days, 1:25:40 |
https://api.github.com/repos/huggingface/datasets/issues/7869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7869/comments | https://api.github.com/repos/huggingface/datasets/issues/7869/events | https://github.com/huggingface/datasets/issues/7869 | 3,636,808,734 | I_kwDODunzps7YxUwe | 7,869 | Why does dataset merge fail when tools have different parameters? | {
"avatar_url": "https://avatars.githubusercontent.com/u/116297296?v=4",
"events_url": "https://api.github.com/users/hitszxs/events{/privacy}",
"followers_url": "https://api.github.com/users/hitszxs/followers",
"following_url": "https://api.github.com/users/hitszxs/following{/other_user}",
"gists_url": "https... | [] | open | false | null | [] | [
"Hi @hitszxs,\n This is indeed by design,\n\nThe `datasets` library is built on top of [Apache Arrow](https://arrow.apache.org/), which uses a **columnar storage format** with strict schema requirements. When you try to concatenate/merge datasets, the library checks if features can be aligned using the [`_check_if_... | 2025-11-18T08:33:04 | 2025-11-30T03:52:07 | null | NONE | null | null | null | null | Hi, I have a question about SFT (Supervised Fine-tuning) for an agent model.
Suppose I want to fine-tune an agent model that may receive two different tools: tool1 and tool2. These tools have different parameters and types in their schema definitions.
When I try to merge datasets containing different tool definitions... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7869/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7869/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7868 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7868/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7868/comments | https://api.github.com/repos/huggingface/datasets/issues/7868/events | https://github.com/huggingface/datasets/issues/7868 | 3,632,429,308 | I_kwDODunzps7Ygnj8 | 7,868 | Data duplication with `split_dataset_by_node` and `interleaved_dataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42485228?v=4",
"events_url": "https://api.github.com/users/ValMystletainn/events{/privacy}",
"followers_url": "https://api.github.com/users/ValMystletainn/followers",
"following_url": "https://api.github.com/users/ValMystletainn/following{/other_user}",
... | [] | open | false | null | [] | [
"Hi @ValMystletainn ,\nCan I be assigned this issue?",
"> split_dataset_by_node\n\nHello, I have some questions about your intended use: (1) It seems unnecessary to use interleaving for a single dataset. (2) For multiple datasets, it seems possible to interleave first and then split by node?"
] | 2025-11-17T09:15:24 | 2025-11-29T03:21:34 | null | NONE | null | null | null | null | ### Describe the bug
Data is duplicated across ranks when processing an `IterableDataset` with `split_dataset_by_node` first and then `interleave_datasets`.
### Steps to reproduce the bug
I have provided a minimal script:
```python
import os
from datasets import interleave_datasets, load_dataset
from datasets.distribu... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7868/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7868/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7867 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7867/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7867/comments | https://api.github.com/repos/huggingface/datasets/issues/7867/events | https://github.com/huggingface/datasets/issues/7867 | 3,620,931,722 | I_kwDODunzps7X0wiK | 7,867 | NonMatchingSplitsSizesError when loading partial dataset files | {
"avatar_url": "https://avatars.githubusercontent.com/u/13678719?v=4",
"events_url": "https://api.github.com/users/QingGo/events{/privacy}",
"followers_url": "https://api.github.com/users/QingGo/followers",
"following_url": "https://api.github.com/users/QingGo/following{/other_user}",
"gists_url": "https://a... | [] | open | false | null | [] | [
"While using verification_mode='no_checks' parameter in load_dataset() can bypass this validation, this solution is not intuitive or convenient for most users, especially those who are not familiar with all the parameters of the load_dataset() function.\n\n```python\nbook_corpus_ds = load_dataset(\n \"SaylorTwif... | 2025-11-13T12:03:23 | 2025-11-16T15:39:23 | null | NONE | null | null | null | null | ### Describe the bug
When loading only a subset of dataset files while the dataset's README.md contains split metadata, the system throws a `NonMatchingSplitsSizesError`. This prevents users from loading partial datasets for quick validation in cases of poor network conditions or very large datasets.
### Steps to repr... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7867/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7867/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7864 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7864/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7864/comments | https://api.github.com/repos/huggingface/datasets/issues/7864/events | https://github.com/huggingface/datasets/issues/7864 | 3,619,137,823 | I_kwDODunzps7Xt6kf | 7,864 | add_column and add_item erroneously(?) require new_fingerprint parameter | {
"avatar_url": "https://avatars.githubusercontent.com/u/17151810?v=4",
"events_url": "https://api.github.com/users/echthesia/events{/privacy}",
"followers_url": "https://api.github.com/users/echthesia/followers",
"following_url": "https://api.github.com/users/echthesia/following{/other_user}",
"gists_url": "... | [] | open | false | null | [] | [
"Take this with a grain of salt, this is just my personal understanding:\nWhile you technically can overwrite the new_fingerprint with a string, e.g.\n```python\nt = d.add_column(\"new_column\", col_value, new_fingerprint=\"dummy_fp\")\nassert t._fingerprint == \"dummy_fp\" # this is true and will pass\n```\nthis ... | 2025-11-13T02:56:49 | 2025-11-24T20:33:59 | null | NONE | null | null | null | null | ### Describe the bug
Contradicting their documentation (which doesn't mention the parameter at all), both `Dataset.add_column` and `Dataset.add_item` require a `new_fingerprint` string. This parameter is passed directly to the dataset constructor, which has the `fingerprint` parameter listed as optional; is there any reason i... | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7864/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7864/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7863/comments | https://api.github.com/repos/huggingface/datasets/issues/7863/events | https://github.com/huggingface/datasets/issues/7863 | 3,618,836,821 | I_kwDODunzps7XsxFV | 7,863 | Support hosting lance / vortex / iceberg / zarr datasets on huggingface hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/3664715?v=4",
"events_url": "https://api.github.com/users/pavanramkumar/events{/privacy}",
"followers_url": "https://api.github.com/users/pavanramkumar/followers",
"following_url": "https://api.github.com/users/pavanramkumar/following{/other_user}",
"gi... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [
"Kudos!",
"So cool ! Would love to see support for lance :)",
"@lhoestq thanks for your support! Any suggestions across `datasets` or `huggingface_hub` projects to make this happen?\n\nI just noticed this blog post: https://huggingface.co/blog/streaming-datasets\n\nDo you know if `hfFileSystem` from `huggingfac... | 2025-11-13T00:51:07 | 2025-11-26T14:10:29 | null | NONE | null | null | null | null | ### Feature request
Hugging Face datasets has great support for large tabular datasets in Parquet with large partitions. I would love to see two things in the future:
- equivalent support for `lance`, `vortex`, `iceberg`, `zarr` (in that order) in a way that I can stream them using the datasets library
- more fine-gr... | null | {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 5,
"hooray": 2,
"laugh": 2,
"rocket": 8,
"total_count": 23,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7863/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7863/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7861/comments | https://api.github.com/repos/huggingface/datasets/issues/7861/events | https://github.com/huggingface/datasets/issues/7861 | 3,611,821,713 | I_kwDODunzps7XSAaR | 7,861 | Performance Issue: save_to_disk() 200-1200% slower due to unconditional flatten_indices() | {
"avatar_url": "https://avatars.githubusercontent.com/u/222552287?v=4",
"events_url": "https://api.github.com/users/KCKawalkar/events{/privacy}",
"followers_url": "https://api.github.com/users/KCKawalkar/followers",
"following_url": "https://api.github.com/users/KCKawalkar/following{/other_user}",
"gists_url... | [] | open | false | null | [] | [] | 2025-11-11T11:05:38 | 2025-11-11T11:05:38 | null | NONE | null | null | null | null | ## 🐛 Bug Description
The `save_to_disk()` method unconditionally calls `flatten_indices()` when `_indices` is not None, causing severe performance degradation for datasets processed with filtering, shuffling, or multiprocessed mapping operations.
**Root cause**: This line rebuilds the entire dataset unnecessarily:
`... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7861/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7861/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7856/comments | https://api.github.com/repos/huggingface/datasets/issues/7856/events | https://github.com/huggingface/datasets/issues/7856 | 3,603,729,142 | I_kwDODunzps7WzIr2 | 7,856 | Missing transcript column when loading a local dataset with "audiofolder" | {
"avatar_url": "https://avatars.githubusercontent.com/u/10166907?v=4",
"events_url": "https://api.github.com/users/gweltou/events{/privacy}",
"followers_url": "https://api.github.com/users/gweltou/followers",
"following_url": "https://api.github.com/users/gweltou/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | [
"First bad commit 5c8869f8c36dbc8c8d423030b7b7c4fd64f8c729\n\nEDIT: This is not a bug or a regression. It was a breaking change introduced in the commit I mentioned and was also documented in there. The docs state how to handle this now, see https://huggingface.co/docs/datasets/main/en/audio_load#audiofolder-with-m... | 2025-11-08T16:27:58 | 2025-11-09T12:13:38 | 2025-11-09T12:13:38 | NONE | null | null | null | null | ### Describe the bug
My local dataset is not properly loaded when using `load_dataset("audiofolder", data_dir="my_dataset")` with a `jsonl` metadata file.
Only the `audio` column is read while the `transcript` column is not.
The last tested `datasets` version where the behavior was still correct is 2.18.0.
### Steps... | {
"avatar_url": "https://avatars.githubusercontent.com/u/10166907?v=4",
"events_url": "https://api.github.com/users/gweltou/events{/privacy}",
"followers_url": "https://api.github.com/users/gweltou/followers",
"following_url": "https://api.github.com/users/gweltou/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7856/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7856/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 19:45:40 |
https://api.github.com/repos/huggingface/datasets/issues/7852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7852/comments | https://api.github.com/repos/huggingface/datasets/issues/7852/events | https://github.com/huggingface/datasets/issues/7852 | 3,595,450,602 | I_kwDODunzps7WTjjq | 7,852 | Problems with NifTI | {
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | [
"> 2. when uploading via the niftifolder feature, the resulting parquet only contains relative paths to the nifti files:\n\nwhat did you use to upload the dataset ? iirc push_to_hub() does upload the bytes as well, but to_parquet() doesn't",
"> > 2. when uploading via the niftifolder feature, the resulting parque... | 2025-11-06T11:46:33 | 2025-11-06T16:20:38 | 2025-11-06T16:20:38 | CONTRIBUTOR | null | null | null | null | ### Describe the bug
There are currently 2 problems with the new NifTI feature:
1. dealing with zipped files, this is mentioned and explained [here](https://github.com/huggingface/datasets/pull/7815#issuecomment-3496199503)
2. when uploading via the `niftifolder` feature, the resulting parquet only contains relative p... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7852/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7852/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 4:34:05 |
https://api.github.com/repos/huggingface/datasets/issues/7842 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7842/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7842/comments | https://api.github.com/repos/huggingface/datasets/issues/7842/events | https://github.com/huggingface/datasets/issues/7842 | 3,582,182,995 | I_kwDODunzps7Vg8ZT | 7,842 | Transform with columns parameter triggers on non-specified column access | {
"avatar_url": "https://avatars.githubusercontent.com/u/18426892?v=4",
"events_url": "https://api.github.com/users/mr-brobot/events{/privacy}",
"followers_url": "https://api.github.com/users/mr-brobot/followers",
"following_url": "https://api.github.com/users/mr-brobot/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | [] | 2025-11-03T13:55:27 | 2025-11-03T14:34:13 | 2025-11-03T14:34:13 | NONE | null | null | null | null | ### Describe the bug
Iterating over a [`Column`](https://github.com/huggingface/datasets/blob/8b1bd4ec1cc9e9ce022f749abb6485ef984ae7c0/src/datasets/arrow_dataset.py#L633-L692) iterates through the parent [`Dataset`](https://github.com/huggingface/datasets/blob/8b1bd4ec1cc9e9ce022f749abb6485ef984ae7c0/src/datasets/arro... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7842/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7842/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 0:38:46 |
https://api.github.com/repos/huggingface/datasets/issues/7841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7841/comments | https://api.github.com/repos/huggingface/datasets/issues/7841/events | https://github.com/huggingface/datasets/issues/7841 | 3,579,506,747 | I_kwDODunzps7VWvA7 | 7,841 | DOC: `mode` parameter on pdf and video features unused | {
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | [
"They seem to be artefacts from a copy-paste of the Image feature ^^' we should remove them"
] | 2025-11-02T12:37:47 | 2025-11-05T14:04:04 | 2025-11-05T14:04:04 | CONTRIBUTOR | null | null | null | null | Following up on https://github.com/huggingface/datasets/pull/7840 I asked claude code to check for undocumented parameters for other features and it found:
- mode parameter on video is documented but unused: https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py#L48-L49
- the same goes for the... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7841/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7841/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 3 days, 1:26:17 |
https://api.github.com/repos/huggingface/datasets/issues/7839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7839/comments | https://api.github.com/repos/huggingface/datasets/issues/7839/events | https://github.com/huggingface/datasets/issues/7839 | 3,579,121,843 | I_kwDODunzps7VVRCz | 7,839 | datasets doesn't work with python 3.14 | {
"avatar_url": "https://avatars.githubusercontent.com/u/4789087?v=4",
"events_url": "https://api.github.com/users/zachmoshe/events{/privacy}",
"followers_url": "https://api.github.com/users/zachmoshe/followers",
"following_url": "https://api.github.com/users/zachmoshe/following{/other_user}",
"gists_url": "h... | [] | closed | false | null | [] | [
"Thanks for the report.\nHave you tried on main? This should work, there was recently a PR merged to address this problem, see #7817",
"Works on main 👍 \nWhat's the release schedule for `datasets`? Seems like a cadence of ~2weeks so I assume a real version is due pretty soon?",
"let's say we do a new release l... | 2025-11-02T09:09:06 | 2025-11-04T14:02:25 | 2025-11-04T14:02:25 | NONE | null | null | null | null | ### Describe the bug
It seems that `datasets` doesn't work with python==3.14. The root cause seems to be a change in a `dill` API.
```
TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given
```
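The error above is the classic symptom of an override of `Pickler._batch_setitems` written against the pre-3.14 signature, since CPython 3.14 added an `obj` argument to that method. A dependency-free sketch of an arity-checking shim (the class and function names here are hypothetical, purely for illustration):

```python
import inspect

class OldStylePickler:
    # Hypothetical stand-in for an override written against the pre-3.14
    # signature: it accepts only `items`, not the newer `obj` argument.
    def _batch_setitems(self, items):
        return list(items)

def call_batch_setitems(pickler, items, obj=None):
    # Shim sketch: check how many parameters the override declares and
    # call it accordingly, instead of always passing the extra argument.
    params = inspect.signature(pickler._batch_setitems).parameters
    if len(params) >= 2:
        return pickler._batch_setitems(items, obj)
    return pickler._batch_setitems(items)

result = call_batch_setitems(OldStylePickler(), [("key", 1)])
```

Calling the old-style override with the new calling convention directly is what raises the `takes 2 positional arguments but 3 were given` error.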
### Steps to reproduce the bug
(on a new folder)
uv init
uv python pin 3.14
uv... | {
"avatar_url": "https://avatars.githubusercontent.com/u/4789087?v=4",
"events_url": "https://api.github.com/users/zachmoshe/events{/privacy}",
"followers_url": "https://api.github.com/users/zachmoshe/followers",
"following_url": "https://api.github.com/users/zachmoshe/following{/other_user}",
"gists_url": "h... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7839/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7839/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 2 days, 4:53:19 |
https://api.github.com/repos/huggingface/datasets/issues/7837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7837/comments | https://api.github.com/repos/huggingface/datasets/issues/7837/events | https://github.com/huggingface/datasets/issues/7837 | 3,575,454,726 | I_kwDODunzps7VHRwG | 7,837 | mono parameter to the Audio feature is missing | {
"avatar_url": "https://avatars.githubusercontent.com/u/1250234?v=4",
"events_url": "https://api.github.com/users/ernestum/events{/privacy}",
"followers_url": "https://api.github.com/users/ernestum/followers",
"following_url": "https://api.github.com/users/ernestum/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | [
"Hey, we removed the misleading passage in the docstring and enabled support for `num_channels` as torchcodec does",
"thanks!"
] | 2025-10-31T15:41:39 | 2025-11-03T15:59:18 | 2025-11-03T14:24:12 | NONE | null | null | null | null | According to the docs, there is a "mono" parameter to the Audio feature, which turns any stereo into mono. In practice the signal is not touched and the mono parameter, even though documented, does not exist.
https://github.com/huggingface/datasets/blob/41c05299348a499807432ab476e1cdc4143c8772/src/datasets/features/a... | {
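For reference, the downmix the docstring promised is a per-frame average; a dependency-free sketch (not the `datasets` implementation, which, per the maintainer comment above, now delegates channel handling to torchcodec's `num_channels`):

```python
def downmix_to_mono(frames):
    # Average the channels of each interleaved frame, which is what a
    # `mono=True` style option would have to do to the decoded signal.
    return [sum(frame) / len(frame) for frame in frames]

mono = downmix_to_mono([(0.2, 0.4), (1.0, 0.0), (-0.5, 0.5)])
```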
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7837/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7837/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 2 days, 22:42:33 |
https://api.github.com/repos/huggingface/datasets/issues/7834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7834/comments | https://api.github.com/repos/huggingface/datasets/issues/7834/events | https://github.com/huggingface/datasets/issues/7834 | 3,558,802,959 | I_kwDODunzps7UHwYP | 7,834 | Audio.cast_column() or Audio.decode_example() causes Colab kernel crash (std::bad_alloc) | {
"avatar_url": "https://avatars.githubusercontent.com/u/2559570?v=4",
"events_url": "https://api.github.com/users/rachidio/events{/privacy}",
"followers_url": "https://api.github.com/users/rachidio/followers",
"following_url": "https://api.github.com/users/rachidio/following{/other_user}",
"gists_url": "http... | [] | open | false | null | [] | [
"Hi ! `datasets` v4 uses `torchcodec` for audio decoding (previous versions were using `soundfile`). What is your `torchcodec` version ? Can you try other versions of `torchcodec` and see if it works ?",
"When I install `datasets` with `pip install datasets[audio]` it install this version of `torchcodec`:\n```\nN... | 2025-10-27T22:02:00 | 2025-11-15T16:28:04 | null | NONE | null | null | null | null | ### Describe the bug
When using the huggingface datasets.Audio feature to decode a local or remote (public HF dataset) audio file inside Google Colab, the notebook kernel crashes with std::bad_alloc (C++ memory allocation failure).
The crash happens even with a minimal code example and valid .wav file that can be read... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7834/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7834/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7832 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7832/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7832/comments | https://api.github.com/repos/huggingface/datasets/issues/7832/events | https://github.com/huggingface/datasets/issues/7832 | 3,555,991,552 | I_kwDODunzps7T9CAA | 7,832 | [DOCS][minor] TIPS paragraph not compiled in docs/stream | {
"avatar_url": "https://avatars.githubusercontent.com/u/110672812?v=4",
"events_url": "https://api.github.com/users/art-test-stack/events{/privacy}",
"followers_url": "https://api.github.com/users/art-test-stack/followers",
"following_url": "https://api.github.com/users/art-test-stack/following{/other_user}",
... | [] | closed | false | null | [] | [] | 2025-10-27T10:03:22 | 2025-10-27T10:10:54 | 2025-10-27T10:10:54 | CONTRIBUTOR | null | null | null | null | In the client documentation, the markdown 'TIP' admonition in docs/stream#shuffle is not rendered correctly, unlike the other tips on the same page, even though plain markdown handles it correctly.
Documentation:
https://huggingface.co/docs/datasets/v4.3.0/en/stream#shuffle:~:text=%5B!TIP%5D%5BIterableDataset.shuffle(... | {
"avatar_url": "https://avatars.githubusercontent.com/u/110672812?v=4",
"events_url": "https://api.github.com/users/art-test-stack/events{/privacy}",
"followers_url": "https://api.github.com/users/art-test-stack/followers",
"following_url": "https://api.github.com/users/art-test-stack/following{/other_user}",
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7832/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7832/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 0:07:32 |
https://api.github.com/repos/huggingface/datasets/issues/7829 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7829/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7829/comments | https://api.github.com/repos/huggingface/datasets/issues/7829/events | https://github.com/huggingface/datasets/issues/7829 | 3,548,584,085 | I_kwDODunzps7TgxiV | 7,829 | Memory leak / Large memory usage with num_workers = 0 and numerous dataset within DatasetDict | {
"avatar_url": "https://avatars.githubusercontent.com/u/24591024?v=4",
"events_url": "https://api.github.com/users/raphaelsty/events{/privacy}",
"followers_url": "https://api.github.com/users/raphaelsty/followers",
"following_url": "https://api.github.com/users/raphaelsty/following{/other_user}",
"gists_url"... | [] | open | false | null | [] | [
"Thanks for the report, this is possibly related #7722 and #7694.\n\nCould you pls provide steps to reproduce this?",
"To overcome this issue right now I did simply reduce the size of the dataset and ended up running a for loop (my training has now a constant learning rate schedule). From what I understood, and I... | 2025-10-24T09:51:38 | 2025-11-06T13:31:26 | null | NONE | null | null | null | null | ### Describe the bug
Hi team, first off, I love the datasets library! 🥰
I'm encountering a potential memory leak / increasing memory usage when training a model on a very large DatasetDict.
Setup: I have a DatasetDict containing 362 distinct datasets, which sum up to ~2.8 billion rows.
Training Task: I'm performin... | null | {
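A bounded-memory way to cycle through many splits is round-robin chunking over their iterators. This is a generic sketch (`round_robin_chunks` is a hypothetical helper, not part of `datasets`, which offers `interleave_datasets` for this purpose):

```python
from itertools import islice

def round_robin_chunks(iterables, chunk_size):
    # Cycle through the iterators, yielding a small chunk from each in
    # turn, so no single dataset is ever fully materialized in memory.
    iters = [iter(it) for it in iterables]
    while iters:
        alive = []
        for it in iters:
            chunk = list(islice(it, chunk_size))
            if chunk:
                yield from chunk
                alive.append(it)
        iters = alive

mixed = list(round_robin_chunks([range(3), range(5)], 2))
```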
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7829/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7829/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7821 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7821/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7821/comments | https://api.github.com/repos/huggingface/datasets/issues/7821/events | https://github.com/huggingface/datasets/issues/7821 | 3,520,913,195 | I_kwDODunzps7R3N8r | 7,821 | Building a dataset with large variable size arrays results in error ArrowInvalid: Value X too large to fit in C integer type | {
"avatar_url": "https://avatars.githubusercontent.com/u/51880718?v=4",
"events_url": "https://api.github.com/users/kkoutini/events{/privacy}",
"followers_url": "https://api.github.com/users/kkoutini/followers",
"following_url": "https://api.github.com/users/kkoutini/following{/other_user}",
"gists_url": "htt... | [] | open | false | null | [] | [
"Thanks for reporting ! You can fix this by specifying the output type explicitly and use `LargeList` which uses int64 for offsets:\n\n```python\nfeatures = Features({\"audio\": LargeList(Value(\"uint16\"))})\nds = ds.map(..., features=features)\n```\n\nIt would be cool to improve `list_of_pa_arrays_to_pyarrow_list... | 2025-10-16T08:45:17 | 2025-10-20T13:42:05 | null | CONTRIBUTOR | null | null | null | null | ### Describe the bug
I used `map` to store raw audio waveforms of variable lengths in a column of a dataset; the `map` call fails with `ArrowInvalid: Value X too large to fit in C integer type`.
```
Traceback (most recent call last):
Traceback (most recent call last):
File "...lib/python3.12/site-packages/multiprocess/p... | null | {
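As the maintainer comment notes, `LargeList` uses int64 offsets. Why 32-bit offsets overflow can be shown without Arrow at all: list offsets are cumulative element counts, and once the running total exceeds 2**31 - 1 it no longer fits in a C integer. A sketch (`fits_int32_offsets` is a hypothetical helper for illustration):

```python
INT32_MAX = 2**31 - 1  # largest offset a plain (non-large) list can store

def fits_int32_offsets(lengths):
    # Walk the per-row lengths, tracking the cumulative offset the way an
    # Arrow list array does, and report whether it stays within int32.
    total = 0
    for n in lengths:
        total += n
        if total > INT32_MAX:
            return False
    return True

ok = fits_int32_offsets([10**6] * 1000)        # ~1e9 elements, fits
too_big = fits_int32_offsets([10**6] * 3000)   # ~3e9 elements, overflows
```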
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7821/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7821/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7819/comments | https://api.github.com/repos/huggingface/datasets/issues/7819/events | https://github.com/huggingface/datasets/issues/7819 | 3,517,086,110 | I_kwDODunzps7Ronme | 7,819 | Cannot download opus dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/51946663?v=4",
"events_url": "https://api.github.com/users/liamsun2019/events{/privacy}",
"followers_url": "https://api.github.com/users/liamsun2019/followers",
"following_url": "https://api.github.com/users/liamsun2019/following{/other_user}",
"gists_u... | [] | open | false | null | [] | [
"Hi ! it seems \"en-zh\" doesn't exist for this dataset\n\nYou can see the list of subsets here: https://huggingface.co/datasets/Helsinki-NLP/opus_books"
] | 2025-10-15T09:06:19 | 2025-10-20T13:45:16 | null | NONE | null | null | null | null | When I tried to download opus_books using:
from datasets import load_dataset
dataset = load_dataset("Helsinki-NLP/opus_books")
I got the following errors:
FileNotFoundError: Couldn't find any data file at /workspace/Helsinki-NLP/opus_books. Couldn't find 'Helsinki-NLP/opus_books' on the Hugging Face Hub either: Local... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7819/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7819/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7818 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7818/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7818/comments | https://api.github.com/repos/huggingface/datasets/issues/7818/events | https://github.com/huggingface/datasets/issues/7818 | 3,515,887,618 | I_kwDODunzps7RkDAC | 7,818 | train_test_split and stratify breaks with Numpy 2.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/24845694?v=4",
"events_url": "https://api.github.com/users/davebulaval/events{/privacy}",
"followers_url": "https://api.github.com/users/davebulaval/followers",
"following_url": "https://api.github.com/users/davebulaval/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | [
"I can't reproduce this. Could you pls provide an example with a public dataset/artificial dataset and show how you loaded that?\n\nThis works for me:\n\n```python\nimport numpy as np\nfrom datasets import Dataset, Features, ClassLabel, Value\n\ndata = {\"text\": [f\"sample_{i}\" for i in range(100)], \"label\": [i... | 2025-10-15T00:01:19 | 2025-10-28T16:10:44 | 2025-10-28T16:10:44 | NONE | null | null | null | null | ### Describe the bug
As stated in the title, since NumPy changed the semantics of the `copy` argument in version 2.0, the stratify parameter breaks.
e.g. `all_dataset.train_test_split(test_size=0.2,stratify_by_column="label")` returns a Numpy error.
It works if you downgrade Numpy to a version lower than 2.0.
### Steps to reproduce the bug
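What `stratify_by_column` is supposed to do can be reproduced in pure Python. This is a conceptual sketch only (the real implementation samples per class with NumPy, which is where the 2.0 `copy` change bites):

```python
import random
from collections import defaultdict

def stratified_split(labels, test_size=0.2, seed=0):
    # Group row indices by label, then take test_size of each group, so
    # both splits preserve the overall label distribution.
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for idx, label in enumerate(labels):
        by_label[label].append(idx)
    test, train = [], []
    for idxs in by_label.values():
        rng.shuffle(idxs)
        cut = int(len(idxs) * test_size)
        test.extend(idxs[:cut])
        train.extend(idxs[cut:])
    return sorted(train), sorted(test)

train, test = stratified_split([0] * 80 + [1] * 20, test_size=0.2)
```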
... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7818/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7818/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 13 days, 16:09:25 |
https://api.github.com/repos/huggingface/datasets/issues/7816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7816/comments | https://api.github.com/repos/huggingface/datasets/issues/7816/events | https://github.com/huggingface/datasets/issues/7816 | 3,512,210,206 | I_kwDODunzps7RWBMe | 7,816 | disable_progress_bar() not working as expected | {
"avatar_url": "https://avatars.githubusercontent.com/u/5577741?v=4",
"events_url": "https://api.github.com/users/windmaple/events{/privacy}",
"followers_url": "https://api.github.com/users/windmaple/followers",
"following_url": "https://api.github.com/users/windmaple/following{/other_user}",
"gists_url": "h... | [] | closed | false | null | [] | [
"@xianbaoqian ",
"Closing this one since it's a Xet issue."
] | 2025-10-14T03:25:39 | 2025-10-14T23:49:26 | 2025-10-14T23:49:26 | NONE | null | null | null | null | ### Describe the bug
Hi,
I'm trying to load a dataset on Kaggle TPU image. There is some known compat issue with progress bar on Kaggle, so I'm trying to disable the progress bar globally. This does not work as you can see in [here](https://www.kaggle.com/code/windmaple/hf-datasets-issue).
In contrast, disabling pro... | {
"avatar_url": "https://avatars.githubusercontent.com/u/5577741?v=4",
"events_url": "https://api.github.com/users/windmaple/events{/privacy}",
"followers_url": "https://api.github.com/users/windmaple/followers",
"following_url": "https://api.github.com/users/windmaple/following{/other_user}",
"gists_url": "h... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7816/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7816/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 20:23:47 |
https://api.github.com/repos/huggingface/datasets/issues/7813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7813/comments | https://api.github.com/repos/huggingface/datasets/issues/7813/events | https://github.com/huggingface/datasets/issues/7813 | 3,503,446,288 | I_kwDODunzps7Q0lkQ | 7,813 | Caching does not work when using python3.14 | {
"avatar_url": "https://avatars.githubusercontent.com/u/142020129?v=4",
"events_url": "https://api.github.com/users/intexcor/events{/privacy}",
"followers_url": "https://api.github.com/users/intexcor/followers",
"following_url": "https://api.github.com/users/intexcor/following{/other_user}",
"gists_url": "ht... | [] | closed | false | null | [] | [
"https://github.com/uqfoundation/dill/issues/725",
"@intexcor does #7817 fix your problem?"
] | 2025-10-10T15:36:46 | 2025-10-27T17:08:26 | 2025-10-27T17:08:26 | NONE | null | null | null | null | ### Describe the bug
Traceback (most recent call last):
File "/workspace/ctn.py", line 8, in <module>
ds = load_dataset(f"naver-clova-ix/synthdog-{lang}") # or "synthdog-zh" for Chinese
File "/workspace/.venv/lib/python3.14/site-packages/datasets/load.py", line 1397, in load_dataset
builder_instance =... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7813/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7813/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 17 days, 1:31:40 |
https://api.github.com/repos/huggingface/datasets/issues/7811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7811/comments | https://api.github.com/repos/huggingface/datasets/issues/7811/events | https://github.com/huggingface/datasets/issues/7811 | 3,500,741,658 | I_kwDODunzps7QqRQa | 7,811 | SIGSEGV when Python exits due to near null deref | {
"avatar_url": "https://avatars.githubusercontent.com/u/5192353?v=4",
"events_url": "https://api.github.com/users/iankronquist/events{/privacy}",
"followers_url": "https://api.github.com/users/iankronquist/followers",
"following_url": "https://api.github.com/users/iankronquist/following{/other_user}",
"gists... | [] | open | false | null | [] | [
"The issue seems to come from `dill` which is a `datasets` dependency, e.g. this segfaults:\n\n```python\nimport dill\nfrom tqdm import tqdm\nprogress_bar = tqdm(total=(1000), unit='cols', desc='cols ')\nprogress_bar.update(1)\n```\n\n`tqdm` seems to segfault when `dill` is imported. I only found this about segfaul... | 2025-10-09T22:00:11 | 2025-10-10T22:09:24 | null | NONE | null | null | null | null | ### Describe the bug
When I run the following python script using datasets I get a segfault.
```python
from datasets import load_dataset
from tqdm import tqdm
progress_bar = tqdm(total=(1000), unit='cols', desc='cols ')
progress_bar.update(1)
```
```
% lldb -- python3 crashmin.py
(lldb) target create "python3"
Cur... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7811/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7811/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7804/comments | https://api.github.com/repos/huggingface/datasets/issues/7804/events | https://github.com/huggingface/datasets/issues/7804 | 3,498,534,596 | I_kwDODunzps7Qh2bE | 7,804 | Support scientific data formats | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | open | false | null | [] | [
"Please add the support for `Zarr`! That's what we use in the Bioimaging community. It is crucial, because raw upload of a *single* bio image can take _terrabytes in memory_!\n\nThe python library would be `bioio` or `zarr`:\n- [ ] Zarr: `bioio` or `zarr`\n\nSee a [Zarr example](https://ome.github.io/ome-ngff-valid... | 2025-10-09T10:18:24 | 2025-11-26T16:09:43 | null | MEMBER | null | null | null | null | List of formats and libraries we can use to load the data in `datasets`:
- [ ] DICOMs: pydicom
- [x] NIfTIs: nibabel
- [ ] WFDB: wfdb
cc @zaRizk7 for viz
Feel free to comment / suggest other formats and libs you'd like to see or to share your interest in one of the mentioned format | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 5,
"hooray": 4,
"laugh": 0,
"rocket": 0,
"total_count": 10,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7804/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7804/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7802/comments | https://api.github.com/repos/huggingface/datasets/issues/7802/events | https://github.com/huggingface/datasets/issues/7802 | 3,497,454,119 | I_kwDODunzps7Qduon | 7,802 | [Docs] Missing documentation for `Dataset.from_dict` | {
"avatar_url": "https://avatars.githubusercontent.com/u/69421545?v=4",
"events_url": "https://api.github.com/users/aaronshenhao/events{/privacy}",
"followers_url": "https://api.github.com/users/aaronshenhao/followers",
"following_url": "https://api.github.com/users/aaronshenhao/following{/other_user}",
"gist... | [] | open | false | null | [] | [
"I'd like to work on this documentation issue.",
"Hi I'd like to work on this. I can see the docstring is already in the code. \nCould you confirm:\n1. Is this still available?\n2. Should I add this to the main_classes.md file, or is there a specific \n documentation file I should update?\n3. Are there any form... | 2025-10-09T02:54:41 | 2025-10-19T16:09:33 | null | NONE | null | null | null | null | Documentation link: https://huggingface.co/docs/datasets/en/package_reference/main_classes
Link to method (docstring present): https://github.com/huggingface/datasets/blob/6f2502c5a026caa89839713f6f7c8b958e5e83eb/src/datasets/arrow_dataset.py#L1029
The docstring is present for the function, but seems missing from the... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7802/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7802/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7798/comments | https://api.github.com/repos/huggingface/datasets/issues/7798/events | https://github.com/huggingface/datasets/issues/7798 | 3,484,470,782 | I_kwDODunzps7PsM3- | 7,798 | Audio dataset is not decoding on 4.1.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/61390950?v=4",
"events_url": "https://api.github.com/users/thewh1teagle/events{/privacy}",
"followers_url": "https://api.github.com/users/thewh1teagle/followers",
"following_url": "https://api.github.com/users/thewh1teagle/following{/other_user}",
"gist... | [] | open | false | null | [] | [
"Previously (datasets<=3.6.0), audio columns were decoded automatically when accessing a row. Now, for performance reasons, audio decoding is lazy by default: you just see the file path unless you explicitly cast the column to Audio.\n\nHere’s the fix (following the current [datasets audio docs](https://huggingface... | 2025-10-05T06:37:50 | 2025-10-06T14:07:55 | null | NONE | null | null | null | null | ### Describe the bug
The audio column remains as non-decoded objects even when accessed.
```python
dataset = load_dataset("MrDragonFox/Elise", split = "train")
dataset[0] # see that it doesn't show 'array' etc...
```
Works fine with `datasets==3.6.0`
Followed the docs in
- https://huggingface.co/docs/dataset... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7798/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7798/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7793 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7793/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7793/comments | https://api.github.com/repos/huggingface/datasets/issues/7793/events | https://github.com/huggingface/datasets/issues/7793 | 3,459,496,971 | I_kwDODunzps7OM7wL | 7,793 | Cannot load dataset, fails with nested data conversions not implemented for chunked array outputs | {
"avatar_url": "https://avatars.githubusercontent.com/u/41182432?v=4",
"events_url": "https://api.github.com/users/neevparikh/events{/privacy}",
"followers_url": "https://api.github.com/users/neevparikh/followers",
"following_url": "https://api.github.com/users/neevparikh/following{/other_user}",
"gists_url"... | [] | open | false | null | [] | [
"Hey @neevparikh,\nThanks for reporting this! I can reproduce the issue and have identified the root cause.\nProblem: The metr-evals/malt-public dataset contains deeply nested conversation data that exceeds PyArrow's 16MB chunk limit. When PyArrow tries to read it in chunks, it hits a fundamental limitation: \"Nest... | 2025-09-27T01:03:12 | 2025-09-27T21:35:31 | null | NONE | null | null | null | null | ### Describe the bug
Hi! When I load this dataset, it fails with a pyarrow error. I'm using datasets 4.1.1, though I also see this with datasets 4.1.2
To reproduce:
```
import datasets
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
```
Error:
```
Traceback (most recent call las... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7793/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7793/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7792 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7792/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7792/comments | https://api.github.com/repos/huggingface/datasets/issues/7792/events | https://github.com/huggingface/datasets/issues/7792 | 3,456,802,210 | I_kwDODunzps7OCp2i | 7,792 | Concatenate IterableDataset instances and distribute underlying shards in a RoundRobin manner | {
"avatar_url": "https://avatars.githubusercontent.com/u/13559010?v=4",
"events_url": "https://api.github.com/users/LTMeyer/events{/privacy}",
"followers_url": "https://api.github.com/users/LTMeyer/followers",
"following_url": "https://api.github.com/users/LTMeyer/following{/other_user}",
"gists_url": "https:... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | [
"# With `datasets.Dataset`\n\nHere is an small script that shows the distribution differences of samples between `interleave_datasets`, `concatenate_datasets` and `concatenate_datasets` + shuffling.\n\n```python\nimport datasets as hf_datasets\n\ndef gen(dataset: int, n_samples: int):\n for i in range(n_samples)... | 2025-09-26T10:05:19 | 2025-10-15T18:05:23 | 2025-10-15T18:05:23 | NONE | null | null | null | null | ### Feature request
I would like to be able to concatenate multiple `IterableDataset` with possibly different features. I would like to then be able to stream the results in parallel (both using DDP and multiple workers in the pytorch DataLoader). I want the merge of datasets to be well balanced between the different ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/13559010?v=4",
"events_url": "https://api.github.com/users/LTMeyer/events{/privacy}",
"followers_url": "https://api.github.com/users/LTMeyer/followers",
"following_url": "https://api.github.com/users/LTMeyer/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7792/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7792/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 19 days, 8:00:04 |
https://api.github.com/repos/huggingface/datasets/issues/7883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7883/comments | https://api.github.com/repos/huggingface/datasets/issues/7883/events | https://github.com/huggingface/datasets/issues/7883 | 3,668,182,561 | I_kwDODunzps7apAYh | 7,883 | Data.to_csv() cannot be recognized by pylance | {
"avatar_url": "https://avatars.githubusercontent.com/u/154290630?v=4",
"events_url": "https://api.github.com/users/xi4ngxin/events{/privacy}",
"followers_url": "https://api.github.com/users/xi4ngxin/followers",
"following_url": "https://api.github.com/users/xi4ngxin/following{/other_user}",
"gists_url": "ht... | [] | open | false | null | [] | [] | 2025-11-26T16:16:56 | 2025-11-26T16:16:56 | null | NONE | null | null | null | null | ### Describe the bug
Hi, everyone! I am a beginner with datasets.
I am testing reading multiple CSV files from a zip archive. Loading the dataset succeeds, and it can ultimately be saved to CSV correctly.
Intermediate results:
```
Generating train split: 62973 examples [00:00, 175939.01 examples/... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7883/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7883/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7882/comments | https://api.github.com/repos/huggingface/datasets/issues/7882/events | https://github.com/huggingface/datasets/issues/7882 | 3,667,664,527 | I_kwDODunzps7anB6P | 7,882 | Inconsistent loading of LFS-hosted files in epfml/FineWeb-HQ dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/6270922?v=4",
"events_url": "https://api.github.com/users/Oligou/events{/privacy}",
"followers_url": "https://api.github.com/users/Oligou/followers",
"following_url": "https://api.github.com/users/Oligou/following{/other_user}",
"gists_url": "https://ap... | [] | open | false | null | [] | [] | 2025-11-26T14:06:02 | 2025-11-26T14:06:02 | null | NONE | null | null | null | null | ### Describe the bug
Some files in the `epfml/FineWeb-HQ` dataset fail to load via the Hugging Face `datasets` library.
- xet-hosted files load fine
- LFS-hosted files sometimes fail
Example:
- Fails: https://huggingface.co/datasets/epfml/FineWeb-HQ/blob/main/data/CC-MAIN-2024-26/000_00003.parquet
- Works: ht... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7882/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7882/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7880/comments | https://api.github.com/repos/huggingface/datasets/issues/7880/events | https://github.com/huggingface/datasets/issues/7880 | 3,667,561,864 | I_kwDODunzps7amo2I | 7,880 | Spurious label column created when audiofolder/imagefolder directories match split names | {
"avatar_url": "https://avatars.githubusercontent.com/u/132138786?v=4",
"events_url": "https://api.github.com/users/neha222222/events{/privacy}",
"followers_url": "https://api.github.com/users/neha222222/followers",
"following_url": "https://api.github.com/users/neha222222/following{/other_user}",
"gists_url... | [] | open | false | null | [] | [] | 2025-11-26T13:36:24 | 2025-11-26T13:36:24 | null | NONE | null | null | null | null | ## Describe the bug
When using `audiofolder` or `imagefolder` with directories for **splits** (train/test) rather than class labels, a spurious `label` column is created.
**Example:** https://huggingface.co/datasets/datasets-examples/doc-audio-4
```
from datasets import load_dataset
ds = load_dataset("dat... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7880/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7880/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7879 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7879/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7879/comments | https://api.github.com/repos/huggingface/datasets/issues/7879/events | https://github.com/huggingface/datasets/issues/7879 | 3,657,249,446 | I_kwDODunzps7Z_TKm | 7,879 | python core dump when downloading dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/5960219?v=4",
"events_url": "https://api.github.com/users/hansewetz/events{/privacy}",
"followers_url": "https://api.github.com/users/hansewetz/followers",
"following_url": "https://api.github.com/users/hansewetz/following{/other_user}",
"gists_url": "h... | [] | open | false | null | [] | [
"Hi @hansewetz I'm curious, for me it works just fine. Are you still observing the issue?",
"Yup ... still the same issue.\nHowever, after adding a ```sleep(1)``` call after the ``` for``` loop by accident during debugging, the program terminates properly (not a good solution though ... :-) ).\nAre there some thr... | 2025-11-24T06:22:53 | 2025-11-25T20:45:55 | null | NONE | null | null | null | null | ### Describe the bug
When downloading a dataset in streamed mode and exiting the program before the download completes, the python program core dumps when exiting:
```
terminate called without an active exception
Aborted (core dumped)
```
Tested with python 3.12.3, python 3.9.21
### Steps to reproduce the bug
Cr... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7879/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7879/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7877 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7877/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7877/comments | https://api.github.com/repos/huggingface/datasets/issues/7877/events | https://github.com/huggingface/datasets/issues/7877 | 3,652,906,788 | I_kwDODunzps7Zuu8k | 7,877 | work around `tempfile` silently ignoring `TMPDIR` if the dir doesn't exist | {
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"events_url": "https://api.github.com/users/stas00/events{/privacy}",
"followers_url": "https://api.github.com/users/stas00/followers",
"following_url": "https://api.github.com/users/stas00/following{/other_user}",
"gists_url": "https://a... | [] | open | false | null | [] | [
"Hi! Just created a Pull Request (#7890) to try to fix this using your suggestions. I hope it helps!"
] | 2025-11-21T19:51:48 | 2025-11-29T20:37:42 | null | CONTRIBUTOR | null | null | null | null | This should help a lot of users running into `No space left on device` while using `datasets`. Normally the issue is that `/tmp` is too small and the user needs to use another path, which they would normally set as `export TMPDIR=/some/big/storage`
However, the `tempfile` facility that `datasets` and `pyarrow` use ... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7877/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7877/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7872 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7872/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7872/comments | https://api.github.com/repos/huggingface/datasets/issues/7872/events | https://github.com/huggingface/datasets/issues/7872 | 3,643,681,893 | I_kwDODunzps7ZLixl | 7,872 | IterableDataset does not use features information in to_pandas | {
"avatar_url": "https://avatars.githubusercontent.com/u/790640?v=4",
"events_url": "https://api.github.com/users/bonext/events{/privacy}",
"followers_url": "https://api.github.com/users/bonext/followers",
"following_url": "https://api.github.com/users/bonext/following{/other_user}",
"gists_url": "https://api... | [] | open | false | null | [] | [
"Created A PR!",
"Another test script that can be used to test the behavior - \n\n```\nimport datasets\nfrom datasets import features\n\ndef test_crash():\n common_features = features.Features({\n \"a\": features.Value(\"int64\"),\n \"b\": features.List({\"c\": features.Value(\"int64\")}),\n }... | 2025-11-19T17:12:59 | 2025-11-19T18:52:14 | null | NONE | null | null | null | null | ### Describe the bug
`IterableDataset` created from a generator with an explicit `features=` parameter seems to ignore the provided features description for certain operations, e.g. `.to_pandas(...)`, when data coming from the generator has missing values.
### Steps to reproduce the bug
```python
import datasets
from datasets... | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7872/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7872/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7871/comments | https://api.github.com/repos/huggingface/datasets/issues/7871/events | https://github.com/huggingface/datasets/issues/7871 | 3,643,607,371 | I_kwDODunzps7ZLQlL | 7,871 | Reqwest Error: HTTP status client error (429 Too Many Requests) | {
"avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4",
"events_url": "https://api.github.com/users/yanan1116/events{/privacy}",
"followers_url": "https://api.github.com/users/yanan1116/followers",
"following_url": "https://api.github.com/users/yanan1116/following{/other_user}",
"gists_url": "... | [] | open | false | null | [] | [
"the dataset repo: `https://huggingface.co/datasets/nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim`",
"Hi @yanan1116,\n\nThanks for the detailed report! However, this issue was filed in the wrong repository. This is a `huggingface_hub` issue, not a `datasets` issue.\n\nLooking at your traceback, you're using t... | 2025-11-19T16:52:24 | 2025-11-30T03:32:00 | null | NONE | null | null | null | null | ### Describe the bug
full error message:
```
Traceback (most recent call last):
File "/home/yanan/miniconda3/bin/hf", line 7, in <module>
sys.exit(main())
~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/hf.py", line 56, in main
app()
~~~^^
File "/ho... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7871/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7871/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7870 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7870/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7870/comments | https://api.github.com/repos/huggingface/datasets/issues/7870/events | https://github.com/huggingface/datasets/issues/7870 | 3,642,209,953 | I_kwDODunzps7ZF7ah | 7,870 | Visualization for Medical Imaging Datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | [
"It would be amazing to be able to show the Papaya UI in google colab / jupyter notebook. IIRC both allow serving javascript via nbextensions that we can surely use in HTML() objects.\n\nAlternatively we could also start with a simple approach and dump the medical image data as a video file that goes through the sl... | 2025-11-19T11:05:39 | 2025-11-21T12:31:19 | 2025-11-21T12:31:19 | CONTRIBUTOR | null | null | null | null | This is a followup to: https://github.com/huggingface/datasets/pull/7815.
I checked the possibilities to visualize the nifti (and potentially dicom), and here's what I found:
- https://github.com/aces/brainbrowser, AGPL3 license, last commit 3 months ago, latest (github) release from 2017. It's available on jsdelivr... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7870/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7870/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 2 days, 1:25:40 |
https://api.github.com/repos/huggingface/datasets/issues/7869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7869/comments | https://api.github.com/repos/huggingface/datasets/issues/7869/events | https://github.com/huggingface/datasets/issues/7869 | 3,636,808,734 | I_kwDODunzps7YxUwe | 7,869 | Why does dataset merge fail when tools have different parameters? | {
"avatar_url": "https://avatars.githubusercontent.com/u/116297296?v=4",
"events_url": "https://api.github.com/users/hitszxs/events{/privacy}",
"followers_url": "https://api.github.com/users/hitszxs/followers",
"following_url": "https://api.github.com/users/hitszxs/following{/other_user}",
"gists_url": "https... | [] | open | false | null | [] | [
"Hi @hitszxs,\n This is indeed by design,\n\nThe `datasets` library is built on top of [Apache Arrow](https://arrow.apache.org/), which uses a **columnar storage format** with strict schema requirements. When you try to concatenate/merge datasets, the library checks if features can be aligned using the [`_check_if_... | 2025-11-18T08:33:04 | 2025-11-30T03:52:07 | null | NONE | null | null | null | null | Hi, I have a question about SFT (Supervised Fine-tuning) for an agent model.
Suppose I want to fine-tune an agent model that may receive two different tools: tool1 and tool2. These tools have different parameters and types in their schema definitions.
When I try to merge datasets containing different tool definitions... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7869/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7869/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7868 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7868/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7868/comments | https://api.github.com/repos/huggingface/datasets/issues/7868/events | https://github.com/huggingface/datasets/issues/7868 | 3,632,429,308 | I_kwDODunzps7Ygnj8 | 7,868 | Data duplication with `split_dataset_by_node` and `interleaved_dataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/42485228?v=4",
"events_url": "https://api.github.com/users/ValMystletainn/events{/privacy}",
"followers_url": "https://api.github.com/users/ValMystletainn/followers",
"following_url": "https://api.github.com/users/ValMystletainn/following{/other_user}",
... | [] | open | false | null | [] | [
"Hi @ValMystletainn ,\nCan I be assigned this issue?",
"> split_dataset_by_node\n\nHello, I have some questions about your intended use: (1) It seems unnecessary to use interleaving for a single dataset. (2) For multiple datasets, it seems possible to interleave first and then split by node?"
] | 2025-11-17T09:15:24 | 2025-11-29T03:21:34 | null | NONE | null | null | null | null | ### Describe the bug
Data duplication across ranks when processing an IterableDataset with `split_dataset_by_node` first and then `interleave_datasets`
### Steps to reproduce the bug
I have provided a minimal script
```python
import os
from datasets import interleave_datasets, load_dataset
from datasets.distribu... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7868/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7868/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7867 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7867/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7867/comments | https://api.github.com/repos/huggingface/datasets/issues/7867/events | https://github.com/huggingface/datasets/issues/7867 | 3,620,931,722 | I_kwDODunzps7X0wiK | 7,867 | NonMatchingSplitsSizesError when loading partial dataset files | {
"avatar_url": "https://avatars.githubusercontent.com/u/13678719?v=4",
"events_url": "https://api.github.com/users/QingGo/events{/privacy}",
"followers_url": "https://api.github.com/users/QingGo/followers",
"following_url": "https://api.github.com/users/QingGo/following{/other_user}",
"gists_url": "https://a... | [] | open | false | null | [] | [
"While using verification_mode='no_checks' parameter in load_dataset() can bypass this validation, this solution is not intuitive or convenient for most users, especially those who are not familiar with all the parameters of the load_dataset() function.\n\n```python\nbook_corpus_ds = load_dataset(\n \"SaylorTwif... | 2025-11-13T12:03:23 | 2025-11-16T15:39:23 | null | NONE | null | null | null | null | ### Describe the bug
When loading only a subset of dataset files while the dataset's README.md contains split metadata, the system throws a NonMatchingSplitsSizesError. This prevents users from loading partial datasets for quick validation in cases of poor network conditions or very large datasets.
### Steps to repr... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7867/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7867/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7864 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7864/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7864/comments | https://api.github.com/repos/huggingface/datasets/issues/7864/events | https://github.com/huggingface/datasets/issues/7864 | 3,619,137,823 | I_kwDODunzps7Xt6kf | 7,864 | add_column and add_item erroneously(?) require new_fingerprint parameter | {
"avatar_url": "https://avatars.githubusercontent.com/u/17151810?v=4",
"events_url": "https://api.github.com/users/echthesia/events{/privacy}",
"followers_url": "https://api.github.com/users/echthesia/followers",
"following_url": "https://api.github.com/users/echthesia/following{/other_user}",
"gists_url": "... | [] | open | false | null | [] | [
"Take this with a grain of salt, this is just my personal understanding:\nWhile you technically can overwrite the new_fingerprint with a string, e.g.\n```python\nt = d.add_column(\"new_column\", col_value, new_fingerprint=\"dummy_fp\")\nassert t._fingerprint == \"dummy_fp\" # this is true and will pass\n```\nthis ... | 2025-11-13T02:56:49 | 2025-11-24T20:33:59 | null | NONE | null | null | null | null | ### Describe the bug
Contradicting their documentation (which doesn't mention the parameter at all), both Dataset.add_column and Dataset.add_item require a new_fingerprint string. This parameter is passed directly to the dataset constructor, which has the fingerprint parameter listed as optional; is there any reason i... | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7864/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7864/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7863/comments | https://api.github.com/repos/huggingface/datasets/issues/7863/events | https://github.com/huggingface/datasets/issues/7863 | 3,618,836,821 | I_kwDODunzps7XsxFV | 7,863 | Support hosting lance / vortex / iceberg / zarr datasets on huggingface hub | {
"avatar_url": "https://avatars.githubusercontent.com/u/3664715?v=4",
"events_url": "https://api.github.com/users/pavanramkumar/events{/privacy}",
"followers_url": "https://api.github.com/users/pavanramkumar/followers",
"following_url": "https://api.github.com/users/pavanramkumar/following{/other_user}",
"gi... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [
"Kudos!",
"So cool ! Would love to see support for lance :)",
"@lhoestq thanks for your support! Any suggestions across `datasets` or `huggingface_hub` projects to make this happen?\n\nI just noticed this blog post: https://huggingface.co/blog/streaming-datasets\n\nDo you know if `hfFileSystem` from `huggingfac... | 2025-11-13T00:51:07 | 2025-11-26T14:10:29 | null | NONE | null | null | null | null | ### Feature request
Huggingface datasets has great support for large tabular datasets in parquet with large partitions. I would love to see two things in the future:
- equivalent support for `lance`, `vortex`, `iceberg`, `zarr` (in that order) in a way that I can stream them using the datasets library
- more fine-gr... | null | {
"+1": 4,
"-1": 0,
"confused": 0,
"eyes": 2,
"heart": 5,
"hooray": 2,
"laugh": 2,
"rocket": 8,
"total_count": 23,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7863/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7863/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7861/comments | https://api.github.com/repos/huggingface/datasets/issues/7861/events | https://github.com/huggingface/datasets/issues/7861 | 3,611,821,713 | I_kwDODunzps7XSAaR | 7,861 | Performance Issue: save_to_disk() 200-1200% slower due to unconditional flatten_indices() | {
"avatar_url": "https://avatars.githubusercontent.com/u/222552287?v=4",
"events_url": "https://api.github.com/users/KCKawalkar/events{/privacy}",
"followers_url": "https://api.github.com/users/KCKawalkar/followers",
"following_url": "https://api.github.com/users/KCKawalkar/following{/other_user}",
"gists_url... | [] | open | false | null | [] | [] | 2025-11-11T11:05:38 | 2025-11-11T11:05:38 | null | NONE | null | null | null | null | ## 🐛 Bug Description
The `save_to_disk()` method unconditionally calls `flatten_indices()` when `_indices` is not None, causing severe performance degradation for datasets processed with filtering, shuffling, or multiprocessed mapping operations.
**Root cause**: This line rebuilds the entire dataset unnecessarily:
`... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7861/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7861/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7856/comments | https://api.github.com/repos/huggingface/datasets/issues/7856/events | https://github.com/huggingface/datasets/issues/7856 | 3,603,729,142 | I_kwDODunzps7WzIr2 | 7,856 | Missing transcript column when loading a local dataset with "audiofolder" | {
"avatar_url": "https://avatars.githubusercontent.com/u/10166907?v=4",
"events_url": "https://api.github.com/users/gweltou/events{/privacy}",
"followers_url": "https://api.github.com/users/gweltou/followers",
"following_url": "https://api.github.com/users/gweltou/following{/other_user}",
"gists_url": "https:... | [] | closed | false | null | [] | [
"First bad commit 5c8869f8c36dbc8c8d423030b7b7c4fd64f8c729\n\nEDIT: This is not a bug or a regression. It was a breaking change introduced in the commit I mentioned and was also documented in there. The docs state how to handle this now, see https://huggingface.co/docs/datasets/main/en/audio_load#audiofolder-with-m... | 2025-11-08T16:27:58 | 2025-11-09T12:13:38 | 2025-11-09T12:13:38 | NONE | null | null | null | null | ### Describe the bug
My local dataset is not properly loaded when using `load_dataset("audiofolder", data_dir="my_dataset")` with a `jsonl` metadata file.
Only the `audio` column is read while the `transcript` column is not.
The last tested `datasets` version where the behavior was still correct is 2.18.0.
### Steps... | {
"avatar_url": "https://avatars.githubusercontent.com/u/10166907?v=4",
"events_url": "https://api.github.com/users/gweltou/events{/privacy}",
"followers_url": "https://api.github.com/users/gweltou/followers",
"following_url": "https://api.github.com/users/gweltou/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7856/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7856/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 19:45:40 |
https://api.github.com/repos/huggingface/datasets/issues/7852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7852/comments | https://api.github.com/repos/huggingface/datasets/issues/7852/events | https://github.com/huggingface/datasets/issues/7852 | 3,595,450,602 | I_kwDODunzps7WTjjq | 7,852 | Problems with NifTI | {
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | [
"> 2. when uploading via the niftifolder feature, the resulting parquet only contains relative paths to the nifti files:\n\nwhat did you use to upload the dataset ? iirc push_to_hub() does upload the bytes as well, but to_parquet() doesn't",
"> > 2. when uploading via the niftifolder feature, the resulting parque... | 2025-11-06T11:46:33 | 2025-11-06T16:20:38 | 2025-11-06T16:20:38 | CONTRIBUTOR | null | null | null | null | ### Describe the bug
There are currently 2 problems with the new NifTI feature:
1. dealing with zipped files, this is mentioned and explained [here](https://github.com/huggingface/datasets/pull/7815#issuecomment-3496199503)
2. when uploading via the `niftifolder` feature, the resulting parquet only contains relative p... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7852/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7852/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 4:34:05 |
https://api.github.com/repos/huggingface/datasets/issues/7842 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7842/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7842/comments | https://api.github.com/repos/huggingface/datasets/issues/7842/events | https://github.com/huggingface/datasets/issues/7842 | 3,582,182,995 | I_kwDODunzps7Vg8ZT | 7,842 | Transform with columns parameter triggers on non-specified column access | {
"avatar_url": "https://avatars.githubusercontent.com/u/18426892?v=4",
"events_url": "https://api.github.com/users/mr-brobot/events{/privacy}",
"followers_url": "https://api.github.com/users/mr-brobot/followers",
"following_url": "https://api.github.com/users/mr-brobot/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | [] | 2025-11-03T13:55:27 | 2025-11-03T14:34:13 | 2025-11-03T14:34:13 | NONE | null | null | null | null | ### Describe the bug
Iterating over a [`Column`](https://github.com/huggingface/datasets/blob/8b1bd4ec1cc9e9ce022f749abb6485ef984ae7c0/src/datasets/arrow_dataset.py#L633-L692) iterates through the parent [`Dataset`](https://github.com/huggingface/datasets/blob/8b1bd4ec1cc9e9ce022f749abb6485ef984ae7c0/src/datasets/arro... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7842/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7842/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 0:38:46 |
https://api.github.com/repos/huggingface/datasets/issues/7841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7841/comments | https://api.github.com/repos/huggingface/datasets/issues/7841/events | https://github.com/huggingface/datasets/issues/7841 | 3,579,506,747 | I_kwDODunzps7VWvA7 | 7,841 | DOC: `mode` parameter on pdf and video features unused | {
"avatar_url": "https://avatars.githubusercontent.com/u/31857876?v=4",
"events_url": "https://api.github.com/users/CloseChoice/events{/privacy}",
"followers_url": "https://api.github.com/users/CloseChoice/followers",
"following_url": "https://api.github.com/users/CloseChoice/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | [
"They seem to be artefacts from a copy-paste of the Image feature ^^' we should remove them"
] | 2025-11-02T12:37:47 | 2025-11-05T14:04:04 | 2025-11-05T14:04:04 | CONTRIBUTOR | null | null | null | null | Following up on https://github.com/huggingface/datasets/pull/7840 I asked claude code to check for undocumented parameters for other features and it found:
- mode parameter on video is documented but unused: https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py#L48-L49
- the same goes for the... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7841/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7841/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 3 days, 1:26:17 |
https://api.github.com/repos/huggingface/datasets/issues/7839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7839/comments | https://api.github.com/repos/huggingface/datasets/issues/7839/events | https://github.com/huggingface/datasets/issues/7839 | 3,579,121,843 | I_kwDODunzps7VVRCz | 7,839 | datasets doesn't work with python 3.14 | {
"avatar_url": "https://avatars.githubusercontent.com/u/4789087?v=4",
"events_url": "https://api.github.com/users/zachmoshe/events{/privacy}",
"followers_url": "https://api.github.com/users/zachmoshe/followers",
"following_url": "https://api.github.com/users/zachmoshe/following{/other_user}",
"gists_url": "h... | [] | closed | false | null | [] | [
"Thanks for the report.\nHave you tried on main? This should work, there was recently a PR merged to address this problem, see #7817",
"Works on main 👍 \nWhat's the release schedule for `datasets`? Seems like a cadence of ~2weeks so I assume a real version is due pretty soon?",
"let's say we do a new release l... | 2025-11-02T09:09:06 | 2025-11-04T14:02:25 | 2025-11-04T14:02:25 | NONE | null | null | null | null | ### Describe the bug
Seems that `datasets` doesn't work with python==3.14. The root cause seems to be something with a `dill` API that was changed.
```
TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given
```
### Steps to reproduce the bug
(on a new folder)
uv init
uv python pin 3.14
uv... | {
"avatar_url": "https://avatars.githubusercontent.com/u/4789087?v=4",
"events_url": "https://api.github.com/users/zachmoshe/events{/privacy}",
"followers_url": "https://api.github.com/users/zachmoshe/followers",
"following_url": "https://api.github.com/users/zachmoshe/following{/other_user}",
"gists_url": "h... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7839/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7839/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 2 days, 4:53:19 |
https://api.github.com/repos/huggingface/datasets/issues/7837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7837/comments | https://api.github.com/repos/huggingface/datasets/issues/7837/events | https://github.com/huggingface/datasets/issues/7837 | 3,575,454,726 | I_kwDODunzps7VHRwG | 7,837 | mono parameter to the Audio feature is missing | {
"avatar_url": "https://avatars.githubusercontent.com/u/1250234?v=4",
"events_url": "https://api.github.com/users/ernestum/events{/privacy}",
"followers_url": "https://api.github.com/users/ernestum/followers",
"following_url": "https://api.github.com/users/ernestum/following{/other_user}",
"gists_url": "http... | [] | closed | false | null | [] | [
"Hey, we removed the misleading passage in the docstring and enabled support for `num_channels` as torchcodec does",
"thanks!"
] | 2025-10-31T15:41:39 | 2025-11-03T15:59:18 | 2025-11-03T14:24:12 | NONE | null | null | null | null | According to the docs, there is a "mono" parameter to the Audio feature, which turns any stereo into mono. In practice the signal is not touched and the mono parameter, even though documented, does not exist.
https://github.com/huggingface/datasets/blob/41c05299348a499807432ab476e1cdc4143c8772/src/datasets/features/a... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7837/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7837/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 2 days, 22:42:33 |
https://api.github.com/repos/huggingface/datasets/issues/7834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7834/comments | https://api.github.com/repos/huggingface/datasets/issues/7834/events | https://github.com/huggingface/datasets/issues/7834 | 3,558,802,959 | I_kwDODunzps7UHwYP | 7,834 | Audio.cast_column() or Audio.decode_example() causes Colab kernel crash (std::bad_alloc) | {
"avatar_url": "https://avatars.githubusercontent.com/u/2559570?v=4",
"events_url": "https://api.github.com/users/rachidio/events{/privacy}",
"followers_url": "https://api.github.com/users/rachidio/followers",
"following_url": "https://api.github.com/users/rachidio/following{/other_user}",
"gists_url": "http... | [] | open | false | null | [] | [
"Hi ! `datasets` v4 uses `torchcodec` for audio decoding (previous versions were using `soundfile`). What is your `torchcodec` version ? Can you try other versions of `torchcodec` and see if it works ?",
"When I install `datasets` with `pip install datasets[audio]` it install this version of `torchcodec`:\n```\nN... | 2025-10-27T22:02:00 | 2025-11-15T16:28:04 | null | NONE | null | null | null | null | ### Describe the bug
When using the huggingface datasets.Audio feature to decode a local or remote (public HF dataset) audio file inside Google Colab, the notebook kernel crashes with std::bad_alloc (C++ memory allocation failure).
The crash happens even with a minimal code example and valid .wav file that can be read... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7834/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7834/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7832 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7832/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7832/comments | https://api.github.com/repos/huggingface/datasets/issues/7832/events | https://github.com/huggingface/datasets/issues/7832 | 3,555,991,552 | I_kwDODunzps7T9CAA | 7,832 | [DOCS][minor] TIPS paragraph not compiled in docs/stream | {
"avatar_url": "https://avatars.githubusercontent.com/u/110672812?v=4",
"events_url": "https://api.github.com/users/art-test-stack/events{/privacy}",
"followers_url": "https://api.github.com/users/art-test-stack/followers",
"following_url": "https://api.github.com/users/art-test-stack/following{/other_user}",
... | [] | closed | false | null | [] | [] | 2025-10-27T10:03:22 | 2025-10-27T10:10:54 | 2025-10-27T10:10:54 | CONTRIBUTOR | null | null | null | null | In the client documentation, the markdown 'TIP' paragraph for paragraph in docs/stream#shuffle is not well executed — not as the other in the same page / while markdown is correctly considering it.
Documentation:
https://huggingface.co/docs/datasets/v4.3.0/en/stream#shuffle:~:text=%5B!TIP%5D%5BIterableDataset.shuffle(... | {
"avatar_url": "https://avatars.githubusercontent.com/u/110672812?v=4",
"events_url": "https://api.github.com/users/art-test-stack/events{/privacy}",
"followers_url": "https://api.github.com/users/art-test-stack/followers",
"following_url": "https://api.github.com/users/art-test-stack/following{/other_user}",
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7832/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7832/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 0:07:32 |
https://api.github.com/repos/huggingface/datasets/issues/7829 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7829/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7829/comments | https://api.github.com/repos/huggingface/datasets/issues/7829/events | https://github.com/huggingface/datasets/issues/7829 | 3,548,584,085 | I_kwDODunzps7TgxiV | 7,829 | Memory leak / Large memory usage with num_workers = 0 and numerous dataset within DatasetDict | {
"avatar_url": "https://avatars.githubusercontent.com/u/24591024?v=4",
"events_url": "https://api.github.com/users/raphaelsty/events{/privacy}",
"followers_url": "https://api.github.com/users/raphaelsty/followers",
"following_url": "https://api.github.com/users/raphaelsty/following{/other_user}",
"gists_url"... | [] | open | false | null | [] | [
"Thanks for the report, this is possibly related #7722 and #7694.\n\nCould you pls provide steps to reproduce this?",
"To overcome this issue right now I did simply reduce the size of the dataset and ended up running a for loop (my training has now a constant learning rate schedule). From what I understood, and I... | 2025-10-24T09:51:38 | 2025-11-06T13:31:26 | null | NONE | null | null | null | null | ### Describe the bug
Hi team, first off, I love the datasets library! 🥰
I'm encountering a potential memory leak / increasing memory usage when training a model on a very large DatasetDict.
Setup: I have a DatasetDict containing 362 distinct datasets, which sum up to ~2.8 billion rows.
Training Task: I'm performin... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 1,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7829/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7829/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7821 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7821/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7821/comments | https://api.github.com/repos/huggingface/datasets/issues/7821/events | https://github.com/huggingface/datasets/issues/7821 | 3,520,913,195 | I_kwDODunzps7R3N8r | 7,821 | Building a dataset with large variable size arrays results in error ArrowInvalid: Value X too large to fit in C integer type | {
"avatar_url": "https://avatars.githubusercontent.com/u/51880718?v=4",
"events_url": "https://api.github.com/users/kkoutini/events{/privacy}",
"followers_url": "https://api.github.com/users/kkoutini/followers",
"following_url": "https://api.github.com/users/kkoutini/following{/other_user}",
"gists_url": "htt... | [] | open | false | null | [] | [
"Thanks for reporting ! You can fix this by specifying the output type explicitly and use `LargeList` which uses int64 for offsets:\n\n```python\nfeatures = Features({\"audio\": LargeList(Value(\"uint16\"))})\nds = ds.map(..., features=features)\n```\n\nIt would be cool to improve `list_of_pa_arrays_to_pyarrow_list... | 2025-10-16T08:45:17 | 2025-10-20T13:42:05 | null | CONTRIBUTOR | null | null | null | null | ### Describe the bug
I used `map` to store raw audio waveforms of variable lengths in a column of a dataset; the `map` call fails with ArrowInvalid: Value X too large to fit in C integer type.
```
Traceback (most recent call last):
Traceback (most recent call last):
File "...lib/python3.12/site-packages/multiprocess/p... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7821/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7821/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7819 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7819/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7819/comments | https://api.github.com/repos/huggingface/datasets/issues/7819/events | https://github.com/huggingface/datasets/issues/7819 | 3,517,086,110 | I_kwDODunzps7Ronme | 7,819 | Cannot download opus dataset | {
"avatar_url": "https://avatars.githubusercontent.com/u/51946663?v=4",
"events_url": "https://api.github.com/users/liamsun2019/events{/privacy}",
"followers_url": "https://api.github.com/users/liamsun2019/followers",
"following_url": "https://api.github.com/users/liamsun2019/following{/other_user}",
"gists_u... | [] | open | false | null | [] | [
"Hi ! it seems \"en-zh\" doesn't exist for this dataset\n\nYou can see the list of subsets here: https://huggingface.co/datasets/Helsinki-NLP/opus_books"
] | 2025-10-15T09:06:19 | 2025-10-20T13:45:16 | null | NONE | null | null | null | null | When I tried to download opus_books using:
from datasets import load_dataset
dataset = load_dataset("Helsinki-NLP/opus_books")
I got the following errors:
FileNotFoundError: Couldn't find any data file at /workspace/Helsinki-NLP/opus_books. Couldn't find 'Helsinki-NLP/opus_books' on the Hugging Face Hub either: Local... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7819/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7819/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7818 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7818/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7818/comments | https://api.github.com/repos/huggingface/datasets/issues/7818/events | https://github.com/huggingface/datasets/issues/7818 | 3,515,887,618 | I_kwDODunzps7RkDAC | 7,818 | train_test_split and stratify breaks with Numpy 2.0 | {
"avatar_url": "https://avatars.githubusercontent.com/u/24845694?v=4",
"events_url": "https://api.github.com/users/davebulaval/events{/privacy}",
"followers_url": "https://api.github.com/users/davebulaval/followers",
"following_url": "https://api.github.com/users/davebulaval/following{/other_user}",
"gists_u... | [] | closed | false | null | [] | [
"I can't reproduce this. Could you pls provide an example with a public dataset/artificial dataset and show how you loaded that?\n\nThis works for me:\n\n```python\nimport numpy as np\nfrom datasets import Dataset, Features, ClassLabel, Value\n\ndata = {\"text\": [f\"sample_{i}\" for i in range(100)], \"label\": [i... | 2025-10-15T00:01:19 | 2025-10-28T16:10:44 | 2025-10-28T16:10:44 | NONE | null | null | null | null | ### Describe the bug
As stated in the title, since Numpy changed in version >2.0 with copy, the stratify parameters break.
e.g. `all_dataset.train_test_split(test_size=0.2,stratify_by_column="label")` returns a Numpy error.
It works if you downgrade Numpy to a version lower than 2.0.
### Steps to reproduce the bug
... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7818/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7818/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 13 days, 16:09:25 |
https://api.github.com/repos/huggingface/datasets/issues/7816 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7816/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7816/comments | https://api.github.com/repos/huggingface/datasets/issues/7816/events | https://github.com/huggingface/datasets/issues/7816 | 3,512,210,206 | I_kwDODunzps7RWBMe | 7,816 | disable_progress_bar() not working as expected | {
"avatar_url": "https://avatars.githubusercontent.com/u/5577741?v=4",
"events_url": "https://api.github.com/users/windmaple/events{/privacy}",
"followers_url": "https://api.github.com/users/windmaple/followers",
"following_url": "https://api.github.com/users/windmaple/following{/other_user}",
"gists_url": "h... | [] | closed | false | null | [] | [
"@xianbaoqian ",
"Closing this one since it's a Xet issue."
] | 2025-10-14T03:25:39 | 2025-10-14T23:49:26 | 2025-10-14T23:49:26 | NONE | null | null | null | null | ### Describe the bug
Hi,
I'm trying to load a dataset on Kaggle TPU image. There is some known compat issue with progress bar on Kaggle, so I'm trying to disable the progress bar globally. This does not work as you can see in [here](https://www.kaggle.com/code/windmaple/hf-datasets-issue).
In contrast, disabling pro... | {
"avatar_url": "https://avatars.githubusercontent.com/u/5577741?v=4",
"events_url": "https://api.github.com/users/windmaple/events{/privacy}",
"followers_url": "https://api.github.com/users/windmaple/followers",
"following_url": "https://api.github.com/users/windmaple/following{/other_user}",
"gists_url": "h... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7816/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7816/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 20:23:47 |
https://api.github.com/repos/huggingface/datasets/issues/7813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7813/comments | https://api.github.com/repos/huggingface/datasets/issues/7813/events | https://github.com/huggingface/datasets/issues/7813 | 3,503,446,288 | I_kwDODunzps7Q0lkQ | 7,813 | Caching does not work when using python3.14 | {
"avatar_url": "https://avatars.githubusercontent.com/u/142020129?v=4",
"events_url": "https://api.github.com/users/intexcor/events{/privacy}",
"followers_url": "https://api.github.com/users/intexcor/followers",
"following_url": "https://api.github.com/users/intexcor/following{/other_user}",
"gists_url": "ht... | [] | closed | false | null | [] | [
"https://github.com/uqfoundation/dill/issues/725",
"@intexcor does #7817 fix your problem?"
] | 2025-10-10T15:36:46 | 2025-10-27T17:08:26 | 2025-10-27T17:08:26 | NONE | null | null | null | null | ### Describe the bug
Traceback (most recent call last):
File "/workspace/ctn.py", line 8, in <module>
ds = load_dataset(f"naver-clova-ix/synthdog-{lang}") # or "synthdog-zh" for Chinese
File "/workspace/.venv/lib/python3.14/site-packages/datasets/load.py", line 1397, in load_dataset
builder_instance =... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7813/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7813/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 17 days, 1:31:40 |
https://api.github.com/repos/huggingface/datasets/issues/7811 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7811/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7811/comments | https://api.github.com/repos/huggingface/datasets/issues/7811/events | https://github.com/huggingface/datasets/issues/7811 | 3,500,741,658 | I_kwDODunzps7QqRQa | 7,811 | SIGSEGV when Python exits due to near null deref | {
"avatar_url": "https://avatars.githubusercontent.com/u/5192353?v=4",
"events_url": "https://api.github.com/users/iankronquist/events{/privacy}",
"followers_url": "https://api.github.com/users/iankronquist/followers",
"following_url": "https://api.github.com/users/iankronquist/following{/other_user}",
"gists... | [] | open | false | null | [] | [
"The issue seems to come from `dill` which is a `datasets` dependency, e.g. this segfaults:\n\n```python\nimport dill\nfrom tqdm import tqdm\nprogress_bar = tqdm(total=(1000), unit='cols', desc='cols ')\nprogress_bar.update(1)\n```\n\n`tqdm` seems to segfault when `dill` is imported. I only found this about segfaul... | 2025-10-09T22:00:11 | 2025-10-10T22:09:24 | null | NONE | null | null | null | null | ### Describe the bug
When I run the following python script using datasets I get a segfault.
```python
from datasets import load_dataset
from tqdm import tqdm
progress_bar = tqdm(total=(1000), unit='cols', desc='cols ')
progress_bar.update(1)
```
```
% lldb -- python3 crashmin.py
(lldb) target create "python3"
Cur... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7811/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7811/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7804 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7804/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7804/comments | https://api.github.com/repos/huggingface/datasets/issues/7804/events | https://github.com/huggingface/datasets/issues/7804 | 3,498,534,596 | I_kwDODunzps7Qh2bE | 7,804 | Support scientific data formats | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | [] | open | false | null | [] | [
"Please add the support for `Zarr`! That's what we use in the Bioimaging community. It is crucial, because raw upload of a *single* bio image can take _terabytes in memory_!\n\nThe python library would be `bioio` or `zarr`:\n- [ ] Zarr: `bioio` or `zarr`\n\nSee a [Zarr example](https://ome.github.io/ome-ngff-valid... | 2025-10-09T10:18:24 | 2025-11-26T16:09:43 | null | MEMBER | null | null | null | null | List of formats and libraries we can use to load the data in `datasets`:
- [ ] DICOMs: pydicom
- [x] NIfTIs: nibabel
- [ ] WFDB: wfdb
cc @zaRizk7 for viz
Feel free to comment / suggest other formats and libs you'd like to see or to share your interest in one of the mentioned formats | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 5,
"hooray": 4,
"laugh": 0,
"rocket": 0,
"total_count": 10,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7804/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7804/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7802 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7802/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7802/comments | https://api.github.com/repos/huggingface/datasets/issues/7802/events | https://github.com/huggingface/datasets/issues/7802 | 3,497,454,119 | I_kwDODunzps7Qduon | 7,802 | [Docs] Missing documentation for `Dataset.from_dict` | {
"avatar_url": "https://avatars.githubusercontent.com/u/69421545?v=4",
"events_url": "https://api.github.com/users/aaronshenhao/events{/privacy}",
"followers_url": "https://api.github.com/users/aaronshenhao/followers",
"following_url": "https://api.github.com/users/aaronshenhao/following{/other_user}",
"gist... | [] | open | false | null | [] | [
"I'd like to work on this documentation issue.",
"Hi I'd like to work on this. I can see the docstring is already in the code. \nCould you confirm:\n1. Is this still available?\n2. Should I add this to the main_classes.md file, or is there a specific \n documentation file I should update?\n3. Are there any form... | 2025-10-09T02:54:41 | 2025-10-19T16:09:33 | null | NONE | null | null | null | null | Documentation link: https://huggingface.co/docs/datasets/en/package_reference/main_classes
Link to method (docstring present): https://github.com/huggingface/datasets/blob/6f2502c5a026caa89839713f6f7c8b958e5e83eb/src/datasets/arrow_dataset.py#L1029
The docstring is present for the function, but seems missing from the... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7802/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7802/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7798 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7798/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7798/comments | https://api.github.com/repos/huggingface/datasets/issues/7798/events | https://github.com/huggingface/datasets/issues/7798 | 3,484,470,782 | I_kwDODunzps7PsM3- | 7,798 | Audio dataset is not decoding on 4.1.1 | {
"avatar_url": "https://avatars.githubusercontent.com/u/61390950?v=4",
"events_url": "https://api.github.com/users/thewh1teagle/events{/privacy}",
"followers_url": "https://api.github.com/users/thewh1teagle/followers",
"following_url": "https://api.github.com/users/thewh1teagle/following{/other_user}",
"gist... | [] | open | false | null | [] | [
"Previously (datasets<=3.6.0), audio columns were decoded automatically when accessing a row. Now, for performance reasons, audio decoding is lazy by default: you just see the file path unless you explicitly cast the column to Audio.\n\nHere’s the fix (following the current [datasets audio docs](https://huggingface... | 2025-10-05T06:37:50 | 2025-10-06T14:07:55 | null | NONE | null | null | null | null | ### Describe the bug
The audio columns remain as non-decoded objects even when accessed.
```python
dataset = load_dataset("MrDragonFox/Elise", split = "train")
dataset[0] # see that it doesn't show 'array' etc...
```
Works fine with `datasets==3.6.0`
Followed the docs in
- https://huggingface.co/docs/dataset... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7798/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7798/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7793 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7793/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7793/comments | https://api.github.com/repos/huggingface/datasets/issues/7793/events | https://github.com/huggingface/datasets/issues/7793 | 3,459,496,971 | I_kwDODunzps7OM7wL | 7,793 | Cannot load dataset, fails with nested data conversions not implemented for chunked array outputs | {
"avatar_url": "https://avatars.githubusercontent.com/u/41182432?v=4",
"events_url": "https://api.github.com/users/neevparikh/events{/privacy}",
"followers_url": "https://api.github.com/users/neevparikh/followers",
"following_url": "https://api.github.com/users/neevparikh/following{/other_user}",
"gists_url"... | [] | open | false | null | [] | [
"Hey @neevparikh,\nThanks for reporting this! I can reproduce the issue and have identified the root cause.\nProblem: The metr-evals/malt-public dataset contains deeply nested conversation data that exceeds PyArrow's 16MB chunk limit. When PyArrow tries to read it in chunks, it hits a fundamental limitation: \"Nest... | 2025-09-27T01:03:12 | 2025-09-27T21:35:31 | null | NONE | null | null | null | null | ### Describe the bug
Hi! When I load this dataset, it fails with a pyarrow error. I'm using datasets 4.1.1, though I also see this with datasets 4.1.2
To reproduce:
```
import datasets
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
```
Error:
```
Traceback (most recent call las... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7793/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7793/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7792 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7792/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7792/comments | https://api.github.com/repos/huggingface/datasets/issues/7792/events | https://github.com/huggingface/datasets/issues/7792 | 3,456,802,210 | I_kwDODunzps7OCp2i | 7,792 | Concatenate IterableDataset instances and distribute underlying shards in a RoundRobin manner | {
"avatar_url": "https://avatars.githubusercontent.com/u/13559010?v=4",
"events_url": "https://api.github.com/users/LTMeyer/events{/privacy}",
"followers_url": "https://api.github.com/users/LTMeyer/followers",
"following_url": "https://api.github.com/users/LTMeyer/following{/other_user}",
"gists_url": "https:... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | [
"# With `datasets.Dataset`\n\nHere is an small script that shows the distribution differences of samples between `interleave_datasets`, `concatenate_datasets` and `concatenate_datasets` + shuffling.\n\n```python\nimport datasets as hf_datasets\n\ndef gen(dataset: int, n_samples: int):\n for i in range(n_samples)... | 2025-09-26T10:05:19 | 2025-10-15T18:05:23 | 2025-10-15T18:05:23 | NONE | null | null | null | null | ### Feature request
I would like to be able to concatenate multiple `IterableDataset` with possibly different features. I would like to then be able to stream the results in parallel (both using DDP and multiple workers in the pytorch DataLoader). I want the merge of datasets to be well balanced between the different ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/13559010?v=4",
"events_url": "https://api.github.com/users/LTMeyer/events{/privacy}",
"followers_url": "https://api.github.com/users/LTMeyer/followers",
"following_url": "https://api.github.com/users/LTMeyer/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7792/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7792/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 19 days, 8:00:04 |
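The round-robin shard distribution requested in this issue can be sketched in plain Python. This is a toy illustration only — `round_robin` and the list arguments stand in for the underlying `IterableDataset` shards and are not part of the `datasets` API:

```python
from itertools import zip_longest

def round_robin(*iterables):
    """Yield one item from each iterable in turn until all are exhausted.

    Shorter inputs simply drop out of the rotation, so the output stays
    balanced across sources for as long as each source has data.
    """
    sentinel = object()
    for batch in zip_longest(*iterables, fillvalue=sentinel):
        for item in batch:
            if item is not sentinel:
                yield item

print(list(round_robin([0, 1, 2], ["a", "b"])))  # [0, 'a', 1, 'b', 2]
```

With real shards, each worker would be handed every k-th shard from this merged sequence, which is the balanced behavior the issue asks for.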
https://api.github.com/repos/huggingface/datasets/issues/7788 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7788/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7788/comments | https://api.github.com/repos/huggingface/datasets/issues/7788/events | https://github.com/huggingface/datasets/issues/7788 | 3,450,913,796 | I_kwDODunzps7NsMQE | 7,788 | `Dataset.to_sql` doesn't utilize `num_proc` | {
"avatar_url": "https://avatars.githubusercontent.com/u/30357072?v=4",
"events_url": "https://api.github.com/users/tcsmaster/events{/privacy}",
"followers_url": "https://api.github.com/users/tcsmaster/followers",
"following_url": "https://api.github.com/users/tcsmaster/following{/other_user}",
"gists_url": "... | [] | open | false | null | [] | [] | 2025-09-24T20:34:47 | 2025-09-24T20:35:01 | null | NONE | null | null | null | null | The underlying `SqlDatasetWriter` has `num_proc` as an available argument [here](https://github.com/huggingface/datasets/blob/5dc1a179783dff868b0547c8486268cfaea1ea1f/src/datasets/io/sql.py#L63) , but `Dataset.to_sql()` does not accept it, therefore it is always using one process for the SQL conversion. | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7788/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7788/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
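As a rough illustration of what a SQL writer does per worker, here is a minimal batched-insert sketch using the stdlib `sqlite3` module. The function name, table schema, and batch size are hypothetical; a forwarded `num_proc` would let several such workers each handle a slice of the batches:

```python
import sqlite3

def write_in_batches(con, rows, batch_size=2):
    """Insert rows in fixed-size batches and return the row count.

    Hypothetical sketch of per-shard SQL writing; not the actual
    SqlDatasetWriter implementation.
    """
    con.execute("CREATE TABLE IF NOT EXISTS t (id INTEGER, name TEXT)")
    for start in range(0, len(rows), batch_size):
        con.executemany("INSERT INTO t VALUES (?, ?)", rows[start:start + batch_size])
    con.commit()
    return con.execute("SELECT COUNT(*) FROM t").fetchone()[0]

con = sqlite3.connect(":memory:")
n = write_in_batches(con, [(1, "a"), (2, "b"), (3, "c")])
print(n)  # 3
```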
https://api.github.com/repos/huggingface/datasets/issues/7780 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7780/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7780/comments | https://api.github.com/repos/huggingface/datasets/issues/7780/events | https://github.com/huggingface/datasets/issues/7780 | 3,429,267,259 | I_kwDODunzps7MZnc7 | 7,780 | BIGPATENT dataset inaccessible (deprecated script loader) | {
"avatar_url": "https://avatars.githubusercontent.com/u/137755081?v=4",
"events_url": "https://api.github.com/users/ishmaifan/events{/privacy}",
"followers_url": "https://api.github.com/users/ishmaifan/followers",
"following_url": "https://api.github.com/users/ishmaifan/following{/other_user}",
"gists_url": ... | [] | closed | false | null | [] | [
"Hi ! I opened https://huggingface.co/datasets/NortheasternUniversity/big_patent/discussions/7 to update the dataset, hopefully it's merged soon !",
"The dataset now works with `datasets` v4 ! closing this issue"
] | 2025-09-18T08:25:34 | 2025-09-25T14:36:13 | 2025-09-25T14:36:13 | NONE | null | null | null | null | dataset: https://huggingface.co/datasets/NortheasternUniversity/big_patent
When I try to load it with the datasets library, it fails with:
RuntimeError: Dataset scripts are no longer supported, but found big_patent.py
Could you please publish a Parquet/Arrow export of BIGPATENT on the Hugging Face Hub so that it can be...
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7780/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7780/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 7 days, 6:10:39 |
https://api.github.com/repos/huggingface/datasets/issues/7777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7777/comments | https://api.github.com/repos/huggingface/datasets/issues/7777/events | https://github.com/huggingface/datasets/issues/7777 | 3,424,462,082 | I_kwDODunzps7MHSUC | 7,777 | push_to_hub not overwriting but stuck in a loop when there are existing commits | {
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | [
"HTTP 412 means a commit happened in the meantime, so `get_deletions_and_dataset_card` has to retry to get the latest version of the dataset card and what files to delete based on the latest version of the dataset repository\n\nAre you running other operations in the dataset repo for your push_to_hub ?",
"There w... | 2025-09-17T03:15:35 | 2025-09-17T19:31:14 | 2025-09-17T19:31:14 | NONE | null | null | null | null | ### Describe the bug
`get_deletions_and_dataset_card` gets stuck in a loop on the "a commit has happened" error (HTTP 412) during `push_to_hub` with tag 4.1.0. The error does not exist in 4.0.0.
### Steps to reproduce the bug
Write code that calls `push_to_hub`, and run it twice, each time with different content in the `datasets.Dataset`.
The... | {
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "htt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7777/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7777/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 16:15:39 |
https://api.github.com/repos/huggingface/datasets/issues/7772 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7772/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7772/comments | https://api.github.com/repos/huggingface/datasets/issues/7772/events | https://github.com/huggingface/datasets/issues/7772 | 3,417,353,751 | I_kwDODunzps7LsK4X | 7,772 | Error processing scalar columns using tensorflow. | {
"avatar_url": "https://avatars.githubusercontent.com/u/3871483?v=4",
"events_url": "https://api.github.com/users/khteh/events{/privacy}",
"followers_url": "https://api.github.com/users/khteh/followers",
"following_url": "https://api.github.com/users/khteh/following{/other_user}",
"gists_url": "https://api.g... | [] | open | false | null | [] | [
"Using tf.convert_to_tensor works fine:\n\n```\nimport tensorflow as tf\n\nstart_pos = tf.convert_to_tensor(train_ds['start_positions'], dtype=tf.int64)\nstart_pos = tf.reshape(start_pos, [-1, 1])\n```\n\n\nAlternatively, using the built-in to_tf_dataset also avoids the issue:\n\n```\ntrain_tf = train_ds.to_tf_data... | 2025-09-15T10:36:31 | 2025-09-27T08:22:44 | null | NONE | null | null | null | null | `datasets==4.0.0`
```
columns_to_return = ['input_ids','attention_mask', 'start_positions', 'end_positions']
train_ds.set_format(type='tf', columns=columns_to_return)
```
`train_ds`:
```
train_ds type: <class 'datasets.arrow_dataset.Dataset'>, shape: (1000, 9)
columns: ['question', 'sentences', 'answer', 'str_idx', 'en... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7772/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7772/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
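The reshape workaround suggested in the comments can be shown with NumPy instead of TensorFlow to keep the sketch dependency-light; the column values below are made up:

```python
import numpy as np

# Scalar label columns (e.g. start_positions) arrive as a flat list of ints.
start_positions = [3, 7, 12]

# The workaround from the discussion: convert to an array and give each
# scalar its own row, i.e. shape (n, 1), before feeding it to the model.
col = np.asarray(start_positions, dtype=np.int64).reshape(-1, 1)
print(col.shape)  # (3, 1)
```

The same idea applies with `tf.convert_to_tensor` plus `tf.reshape(..., [-1, 1])`, as noted in the comment thread.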
https://api.github.com/repos/huggingface/datasets/issues/7767 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7767/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7767/comments | https://api.github.com/repos/huggingface/datasets/issues/7767/events | https://github.com/huggingface/datasets/issues/7767 | 3,411,654,444 | I_kwDODunzps7LWbcs | 7,767 | Custom `dl_manager` in `load_dataset` | {
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "htt... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [] | 2025-09-12T19:06:23 | 2025-09-12T19:07:52 | null | NONE | null | null | null | null | ### Feature request
https://github.com/huggingface/datasets/blob/4.0.0/src/datasets/load.py#L1411-L1418
```
def load_dataset(
...
dl_manager: Optional[DownloadManager] = None, # add this new argument
**config_kwargs,
) -> Union[DatasetDict, Dataset, IterableDatasetDict, IterableDataset]:
...
# ... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7767/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7767/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7766 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7766/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7766/comments | https://api.github.com/repos/huggingface/datasets/issues/7766/events | https://github.com/huggingface/datasets/issues/7766 | 3,411,611,165 | I_kwDODunzps7LWQ4d | 7,766 | cast columns to Image/Audio/Video with `storage_options` | {
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "htt... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [
"A",
"1",
"1",
"Ok",
"> ### Feature request\n> Allow `storage_options` to be passed in\n> \n> 1. `cast` related operations (e.g., `cast_columns, cast`)\n> 2. `info` related reading (e.g., `from_dict, from_pandas, from_polars`) together with `info.features`\n> \n> import datasets\n> \n> image_path = \"s3://b... | 2025-09-12T18:51:01 | 2025-09-27T08:14:47 | null | NONE | null | null | null | null | ### Feature request
Allow `storage_options` to be passed in
1. `cast` related operations (e.g., `cast_columns, cast`)
2. `info` related reading (e.g., `from_dict, from_pandas, from_polars`) together with `info.features`
```python3
import datasets
image_path = "s3://bucket/sample.png"
dataset = datasets.Dataset.from_d... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7766/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7766/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7765 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7765/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7765/comments | https://api.github.com/repos/huggingface/datasets/issues/7765/events | https://github.com/huggingface/datasets/issues/7765 | 3,411,556,378 | I_kwDODunzps7LWDga | 7,765 | polars dataset cannot cast column to Image/Audio/Video | {
"avatar_url": "https://avatars.githubusercontent.com/u/13214530?v=4",
"events_url": "https://api.github.com/users/ain-soph/events{/privacy}",
"followers_url": "https://api.github.com/users/ain-soph/followers",
"following_url": "https://api.github.com/users/ain-soph/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | [
"I fixed this with a combination of `to_dict` and `from_dict`:\n\n```py\ndatasets.Dataset.from_dict(df.to_dict(as_series=False))\n```",
"@samuelstevens Yeah, I'm using similar workaround as well. But it would be ideal if we can avoid the copy."
] | 2025-09-12T18:32:49 | 2025-10-13T14:39:48 | 2025-10-13T14:39:48 | NONE | null | null | null | null | ### Describe the bug
A dataset created via `from_polars` cannot cast a column to Image/Audio/Video, while the same cast works with `from_pandas` and `from_dict`.
### Steps to reproduce the bug
```python3
import datasets
import pandas as pd
import polars as pl
image_path = "./sample.png"
# polars
df = pl.DataFrame({"image_path": [image_path]})
... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7765/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7765/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 30 days, 20:06:59 |
https://api.github.com/repos/huggingface/datasets/issues/7760 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7760/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7760/comments | https://api.github.com/repos/huggingface/datasets/issues/7760/events | https://github.com/huggingface/datasets/issues/7760 | 3,401,799,485 | I_kwDODunzps7Kw1c9 | 7,760 | Hugging Face Hub Dataset Upload CAS Error | {
"avatar_url": "https://avatars.githubusercontent.com/u/142820182?v=4",
"events_url": "https://api.github.com/users/n-bkoe/events{/privacy}",
"followers_url": "https://api.github.com/users/n-bkoe/followers",
"following_url": "https://api.github.com/users/n-bkoe/following{/other_user}",
"gists_url": "https://... | [] | open | false | null | [] | [
"cc @jsulz maybe ?",
"Curious! I took a look at this and was unable to see why this would be occurring on our side. Tagging in @jgodlew and @bpronan since they might have insights. \n\n@n-bkoe just a few questions if you wouldn't mind: \n1. What kind of data are you uploading and what is the difference in file si... | 2025-09-10T10:01:19 | 2025-09-16T20:01:36 | null | NONE | null | null | null | null | ### Describe the bug
Experiencing persistent 401 Unauthorized errors when attempting to upload datasets to Hugging Face Hub using the `datasets` library. The error occurs specifically with the CAS (Content Addressable Storage) service during the upload process. Tried using HF_HUB_DISABLE_XET=1. It seems to work for sm... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7760/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7760/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7759 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7759/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7759/comments | https://api.github.com/repos/huggingface/datasets/issues/7759/events | https://github.com/huggingface/datasets/issues/7759 | 3,398,099,513 | I_kwDODunzps7KiuI5 | 7,759 | Comment/feature request: Huggingface 502s from GHA | {
"avatar_url": "https://avatars.githubusercontent.com/u/52365471?v=4",
"events_url": "https://api.github.com/users/Scott-Simmons/events{/privacy}",
"followers_url": "https://api.github.com/users/Scott-Simmons/followers",
"following_url": "https://api.github.com/users/Scott-Simmons/following{/other_user}",
"g... | [] | open | false | null | [] | [] | 2025-09-09T11:59:20 | 2025-09-09T13:02:28 | null | NONE | null | null | null | null | This is no longer a pressing issue, but for completeness I am reporting that in August 26th, GET requests to `/static-proxy?url=https%3A%2F%2Fdatasets-server.huggingface.co%2Finfo%5C%3Fdataset%5C%3Dlivebench%2Fmath%60 were returning 502s when invoked from [github actions](https://github.com/UKGovernmentBEIS/inspect_evals/actions/runs/17241892475/job/489211... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7759/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7759/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7758/comments | https://api.github.com/repos/huggingface/datasets/issues/7758/events | https://github.com/huggingface/datasets/issues/7758 | 3,395,590,783 | I_kwDODunzps7KZJp_ | 7,758 | Option for Anonymous Dataset link | {
"avatar_url": "https://avatars.githubusercontent.com/u/38985481?v=4",
"events_url": "https://api.github.com/users/egrace479/events{/privacy}",
"followers_url": "https://api.github.com/users/egrace479/followers",
"following_url": "https://api.github.com/users/egrace479/following{/other_user}",
"gists_url": "... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [] | 2025-09-08T20:20:10 | 2025-09-08T20:20:10 | null | NONE | null | null | null | null | ### Feature request
Allow for anonymized viewing of datasets. For instance, something similar to [Anonymous GitHub](https://anonymous.4open.science/).
### Motivation
We generally publish our data through Hugging Face. This has worked out very well as it's both our repository and archive (thanks to the DOI feature!).... | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7758/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7758/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7757 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7757/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7757/comments | https://api.github.com/repos/huggingface/datasets/issues/7757/events | https://github.com/huggingface/datasets/issues/7757 | 3,389,535,011 | I_kwDODunzps7KCDMj | 7,757 | Add support for `.conll` file format in datasets | {
"avatar_url": "https://avatars.githubusercontent.com/u/88763593?v=4",
"events_url": "https://api.github.com/users/namesarnav/events{/privacy}",
"followers_url": "https://api.github.com/users/namesarnav/followers",
"following_url": "https://api.github.com/users/namesarnav/following{/other_user}",
"gists_url"... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [
"That would be cool ! feel free to ping me if I can help reviewing a PR"
] | 2025-09-06T07:25:39 | 2025-09-10T14:22:48 | null | NONE | null | null | null | null | ### Feature request
I’d like to request native support in the Hugging Face datasets library for reading .conll files (CoNLL format). This format is widely used in NLP tasks, especially for Named Entity Recognition (NER), POS tagging, and other token classification problems.
Right now `.conll` datasets need to be manu... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7757/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7757/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7756 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7756/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7756/comments | https://api.github.com/repos/huggingface/datasets/issues/7756/events | https://github.com/huggingface/datasets/issues/7756 | 3,387,076,693 | I_kwDODunzps7J4rBV | 7,756 | datasets.map(f, num_proc=N) hangs with N>1 when run on import | {
"avatar_url": "https://avatars.githubusercontent.com/u/20065?v=4",
"events_url": "https://api.github.com/users/arjunguha/events{/privacy}",
"followers_url": "https://api.github.com/users/arjunguha/followers",
"following_url": "https://api.github.com/users/arjunguha/following{/other_user}",
"gists_url": "htt... | [] | open | false | null | [] | [] | 2025-09-05T10:32:01 | 2025-09-05T10:32:01 | null | NONE | null | null | null | null | ### Describe the bug
If you `import` a module that runs `datasets.map(f, num_proc=N)` at the top-level, Python hangs.
### Steps to reproduce the bug
1. Create a file that runs datasets.map at the top-level:
```bash
cat <<EOF > import_me.py
import datasets
the_dataset = datasets.load_dataset("openai/openai_humanev... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7756/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7756/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7753 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7753/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7753/comments | https://api.github.com/repos/huggingface/datasets/issues/7753/events | https://github.com/huggingface/datasets/issues/7753 | 3,381,831,487 | I_kwDODunzps7Jkqc_ | 7,753 | datasets massively slows data reads, even in memory | {
"avatar_url": "https://avatars.githubusercontent.com/u/1191040?v=4",
"events_url": "https://api.github.com/users/lrast/events{/privacy}",
"followers_url": "https://api.github.com/users/lrast/followers",
"following_url": "https://api.github.com/users/lrast/following{/other_user}",
"gists_url": "https://api.g... | [] | open | false | null | [] | [
"Hi ! you should try\n\n```python\nfrom datasets import Array3D, Dataset, Features, Value\n\nfeatures = Features({\"image\": Array3D(shape=(3, 224, 224), dtype=\"uint8\"), \"label\": Value(\"uint8\")})\nhf_dataset = Dataset.from_dict({'image': images, 'label':labels}, features=features)\n```\n\notherwise the type o... | 2025-09-04T01:45:24 | 2025-09-18T22:08:51 | null | NONE | null | null | null | null | ### Describe the bug
Loading image data in a huggingface dataset results in very slow read speeds, approximately 1000 times longer than reading the same data from a pytorch dataset. This applies even when the dataset is loaded into RAM using a `keep_in_memory=True` flag.
The following script reproduces the result wit... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7753/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7753/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7751 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7751/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7751/comments | https://api.github.com/repos/huggingface/datasets/issues/7751/events | https://github.com/huggingface/datasets/issues/7751 | 3,358,369,976 | I_kwDODunzps7ILKi4 | 7,751 | Dill version update | {
"avatar_url": "https://avatars.githubusercontent.com/u/98005188?v=4",
"events_url": "https://api.github.com/users/Navanit-git/events{/privacy}",
"followers_url": "https://api.github.com/users/Navanit-git/followers",
"following_url": "https://api.github.com/users/Navanit-git/following{/other_user}",
"gists_u... | [] | open | false | null | [] | [
"#7752 ",
"related: #7510 "
] | 2025-08-27T07:38:30 | 2025-09-10T14:24:02 | null | NONE | null | null | null | null | ### Describe the bug
Why is datasets not updating dill?
Just want to know: if I update the dill version, what will the repercussions be?
For now, in multiple places I have to update the library because some processes require dill 0.4.0, so why not datasets?
Adding a PR too.
### Steps to reproduce the bug
.
###... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7751/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7751/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7746 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7746/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7746/comments | https://api.github.com/repos/huggingface/datasets/issues/7746/events | https://github.com/huggingface/datasets/issues/7746 | 3,345,391,211 | I_kwDODunzps7HZp5r | 7,746 | Fix: Canonical 'multi_news' dataset is broken and should be updated to a Parquet version | {
"avatar_url": "https://avatars.githubusercontent.com/u/187888489?v=4",
"events_url": "https://api.github.com/users/Awesome075/events{/privacy}",
"followers_url": "https://api.github.com/users/Awesome075/followers",
"following_url": "https://api.github.com/users/Awesome075/following{/other_user}",
"gists_url... | [] | open | false | null | [] | [
"@sayakpaul @a-r-r-o-w could you verify this issue then i can contribute to solve this issue!😊"
] | 2025-08-22T12:52:03 | 2025-08-27T20:23:35 | null | NONE | null | null | null | null | Hi,
The canonical `multi_news` dataset is currently broken and fails to load. This is because it points to the [alexfabbri/multi_news](https://huggingface.co/datasets/alexfabbri/multi_news) repository, which contains a legacy loading script (`multi_news.py`) that requires the now-removed `trust_remote_code` parameter.... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7746/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7746/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7745 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7745/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7745/comments | https://api.github.com/repos/huggingface/datasets/issues/7745/events | https://github.com/huggingface/datasets/issues/7745 | 3,345,286,773 | I_kwDODunzps7HZQZ1 | 7,745 | Audio mono argument no longer supported, despite class documentation | {
"avatar_url": "https://avatars.githubusercontent.com/u/5666041?v=4",
"events_url": "https://api.github.com/users/jheitz/events{/privacy}",
"followers_url": "https://api.github.com/users/jheitz/followers",
"following_url": "https://api.github.com/users/jheitz/following{/other_user}",
"gists_url": "https://ap... | [] | open | false | null | [] | [
"I want to solve this problem can you please assign it to me\nand also can you please guide whether the mono parameter is required to be re-added or the documentation needs an update?"
] | 2025-08-22T12:15:41 | 2025-08-24T18:22:41 | null | NONE | null | null | null | null | ### Describe the bug
Either update the documentation, or re-introduce the flag (and corresponding logic to convert the audio to mono)
### Steps to reproduce the bug
Audio(sampling_rate=16000, mono=True) raises the error
TypeError: Audio.__init__() got an unexpected keyword argument 'mono'
However, in the class doc... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7745/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7745/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7744 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7744/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7744/comments | https://api.github.com/repos/huggingface/datasets/issues/7744/events | https://github.com/huggingface/datasets/issues/7744 | 3,343,510,686 | I_kwDODunzps7HSeye | 7,744 | dtype: ClassLabel is not parsed correctly in `features.py` | {
"avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4",
"events_url": "https://api.github.com/users/cmatKhan/events{/privacy}",
"followers_url": "https://api.github.com/users/cmatKhan/followers",
"following_url": "https://api.github.com/users/cmatKhan/following{/other_user}",
"gists_url": "htt... | [] | closed | false | null | [] | [
"I think it's \"class_label\"",
"> I think it's \"class_label\"\n\nI see -- thank you. This works\n\n```yaml\nlicense: mit\nlanguage:\n- en\ntags:\n- genomics\n- yeast\n- transcription\n- perturbation\n- response\n- overexpression\npretty_name: Hackett, 2020 Overexpression\nsize_categories:\n- 1M<n<10M\ndataset_i... | 2025-08-21T23:28:50 | 2025-09-10T15:23:41 | 2025-09-10T15:23:41 | NONE | null | null | null | null | `dtype: ClassLabel` in the README.md yaml metadata is parsed incorrectly and causes the data viewer to fail.
This yaml in my metadata ([source](https://huggingface.co/datasets/BrentLab/yeast_genome_resources/blob/main/README.md), though i changed `ClassLabel` to `string` to using different dtype in order to avoid the ... | {
"avatar_url": "https://avatars.githubusercontent.com/u/43553003?v=4",
"events_url": "https://api.github.com/users/cmatKhan/events{/privacy}",
"followers_url": "https://api.github.com/users/cmatKhan/followers",
"following_url": "https://api.github.com/users/cmatKhan/following{/other_user}",
"gists_url": "htt... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7744/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7744/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 19 days, 15:54:51 |
https://api.github.com/repos/huggingface/datasets/issues/7742 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7742/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7742/comments | https://api.github.com/repos/huggingface/datasets/issues/7742/events | https://github.com/huggingface/datasets/issues/7742 | 3,336,704,928 | I_kwDODunzps7G4hOg | 7,742 | module 'pyarrow' has no attribute 'PyExtensionType' | {
"avatar_url": "https://avatars.githubusercontent.com/u/6106392?v=4",
"events_url": "https://api.github.com/users/mnedelko/events{/privacy}",
"followers_url": "https://api.github.com/users/mnedelko/followers",
"following_url": "https://api.github.com/users/mnedelko/following{/other_user}",
"gists_url": "http... | [] | open | false | null | [] | [
"Just checked out the files and thishad already been addressed",
"For others who find this issue: \n\n`pip install --upgrade \"datasets>=2.20.0\"` \n\nfrom https://github.com/explodinggradients/ragas/issues/2170#issuecomment-3204393672 can fix it."
] | 2025-08-20T06:14:33 | 2025-09-09T02:51:46 | null | NONE | null | null | null | null | ### Describe the bug
When importing certain libraries, users will encounter the following error which can be traced back to the datasets library.
module 'pyarrow' has no attribute 'PyExtensionType'.
Example issue: https://github.com/explodinggradients/ragas/issues/2170
The issue occurs due to the following. I will ... | null | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7742/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7742/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7741 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7741/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7741/comments | https://api.github.com/repos/huggingface/datasets/issues/7741/events | https://github.com/huggingface/datasets/issues/7741 | 3,334,848,656 | I_kwDODunzps7GxcCQ | 7,741 | Preserve tree structure when loading HDF5 | {
"avatar_url": "https://avatars.githubusercontent.com/u/17013474?v=4",
"events_url": "https://api.github.com/users/klamike/events{/privacy}",
"followers_url": "https://api.github.com/users/klamike/followers",
"following_url": "https://api.github.com/users/klamike/following{/other_user}",
"gists_url": "https:... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | [] | [] | 2025-08-19T15:42:05 | 2025-08-26T15:28:06 | 2025-08-26T15:28:06 | CONTRIBUTOR | null | null | null | null | ### Feature request
https://github.com/huggingface/datasets/pull/7740#discussion_r2285605374
### Motivation
`datasets` has the `Features` class for representing nested features. HDF5 files have groups of datasets which are nested, though in #7690 the keys are flattened. We should preserve that structure for the user... | {
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https:... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7741/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7741/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 6 days, 23:46:01 |
https://api.github.com/repos/huggingface/datasets/issues/7739 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7739/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7739/comments | https://api.github.com/repos/huggingface/datasets/issues/7739/events | https://github.com/huggingface/datasets/issues/7739 | 3,331,537,762 | I_kwDODunzps7Gkzti | 7,739 | Replacement of "Sequence" feature with "List" breaks backward compatibility | {
"avatar_url": "https://avatars.githubusercontent.com/u/15764776?v=4",
"events_url": "https://api.github.com/users/evmaki/events{/privacy}",
"followers_url": "https://api.github.com/users/evmaki/followers",
"following_url": "https://api.github.com/users/evmaki/following{/other_user}",
"gists_url": "https://a... | [] | open | false | null | [] | [
"Backward compatibility here means 4.0.0 can load datasets saved with older versions.\n\nYou will need 4.0.0 to load datasets saved with 4.0.0"
] | 2025-08-18T17:28:38 | 2025-09-10T14:17:50 | null | NONE | null | null | null | null | PR #7634 replaced the Sequence feature with List in 4.0.0, so datasets saved with version 4.0.0 with that feature cannot be loaded by earlier versions. There is no clear option in 4.0.0 to use the legacy feature type to preserve backward compatibility.
Why is this a problem? I have a complex preprocessing and training... | null | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7739/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7739/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7738/comments | https://api.github.com/repos/huggingface/datasets/issues/7738/events | https://github.com/huggingface/datasets/issues/7738 | 3,328,948,690 | I_kwDODunzps7Ga7nS | 7,738 | Allow saving multi-dimensional ndarray with dynamic shapes | {
"avatar_url": "https://avatars.githubusercontent.com/u/82735346?v=4",
"events_url": "https://api.github.com/users/ryan-minato/events{/privacy}",
"followers_url": "https://api.github.com/users/ryan-minato/followers",
"following_url": "https://api.github.com/users/ryan-minato/following{/other_user}",
"gists_u... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [
"I agree this would be super valuable.\n\nIt looks like this was discussed a few years ago in https://github.com/huggingface/datasets/issues/5272#issuecomment-1550200824 but there were some issues. Those PRs are merged now and it looks like Arrow [officially supports](https://arrow.apache.org/docs/format/CanonicalE... | 2025-08-18T02:23:51 | 2025-08-26T15:25:02 | null | NONE | null | null | null | null | ### Feature request
I propose adding a dedicated feature to the datasets library that allows for the efficient storage and retrieval of multi-dimensional ndarray with dynamic shapes. Similar to how Image columns handle variable-sized images, this feature would provide a structured way to store array data where the dim... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7738/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7738/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7733 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7733/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7733/comments | https://api.github.com/repos/huggingface/datasets/issues/7733/events | https://github.com/huggingface/datasets/issues/7733 | 3,304,979,299 | I_kwDODunzps7E_ftj | 7,733 | Dataset Repo Paths to Locally Stored Images Not Being Appended to Image Path | {
"avatar_url": "https://avatars.githubusercontent.com/u/27898715?v=4",
"events_url": "https://api.github.com/users/dennys246/events{/privacy}",
"followers_url": "https://api.github.com/users/dennys246/followers",
"following_url": "https://api.github.com/users/dennys246/following{/other_user}",
"gists_url": "... | [] | closed | false | null | [] | [
"This is the download issues I come into, about ever other time it fails...\n<img width=\"1719\" height=\"1226\" alt=\"Image\" src=\"https://github.com/user-attachments/assets/2e5b4b3e-7c13-4bad-a77c-34b47a932831\" />",
"I’m guessing this is just a feature so I’m going to close this thread. I also altered my load... | 2025-08-08T19:10:58 | 2025-10-07T04:47:36 | 2025-10-07T04:32:48 | NONE | null | null | null | null | ### Describe the bug
I’m not sure if this is a bug or a feature and I just don’t fully understand how dataset loading is to work, but it appears there may be a bug with how locally stored Image() are being accessed. I’ve uploaded a new dataset to hugging face (rmdig/rocky_mountain_snowpack) but I’ve come into a ton of... | {
"avatar_url": "https://avatars.githubusercontent.com/u/27898715?v=4",
"events_url": "https://api.github.com/users/dennys246/events{/privacy}",
"followers_url": "https://api.github.com/users/dennys246/followers",
"following_url": "https://api.github.com/users/dennys246/following{/other_user}",
"gists_url": "... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7733/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7733/timeline | null | completed | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | 59 days, 9:21:50 |
https://api.github.com/repos/huggingface/datasets/issues/7732 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7732/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7732/comments | https://api.github.com/repos/huggingface/datasets/issues/7732/events | https://github.com/huggingface/datasets/issues/7732 | 3,304,673,383 | I_kwDODunzps7E-VBn | 7,732 | webdataset: key errors when `field_name` has upper case characters | {
"avatar_url": "https://avatars.githubusercontent.com/u/29985433?v=4",
"events_url": "https://api.github.com/users/YassineYousfi/events{/privacy}",
"followers_url": "https://api.github.com/users/YassineYousfi/followers",
"following_url": "https://api.github.com/users/YassineYousfi/following{/other_user}",
"g... | [] | open | false | null | [] | [] | 2025-08-08T16:56:42 | 2025-08-08T16:56:42 | null | CONTRIBUTOR | null | null | null | null | ### Describe the bug
When using a webdataset each sample can be a collection of different "fields"
like this:
```
images17/image194.left.jpg
images17/image194.right.jpg
images17/image194.json
images17/image12.left.jpg
images17/image12.right.jpg
images17/image12.json
```
if the field_name contains upper case characte... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7732/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7732/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7731 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7731/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7731/comments | https://api.github.com/repos/huggingface/datasets/issues/7731/events | https://github.com/huggingface/datasets/issues/7731 | 3,303,637,075 | I_kwDODunzps7E6YBT | 7,731 | Add the possibility of a backend for audio decoding | {
"avatar_url": "https://avatars.githubusercontent.com/u/142020129?v=4",
"events_url": "https://api.github.com/users/intexcor/events{/privacy}",
"followers_url": "https://api.github.com/users/intexcor/followers",
"following_url": "https://api.github.com/users/intexcor/following{/other_user}",
"gists_url": "ht... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [
"is there a work around im stuck",
"never mind just downgraded"
] | 2025-08-08T11:08:56 | 2025-08-20T16:29:33 | null | NONE | null | null | null | null | ### Feature request
Add the possibility of a backend for audio decoding. Before version 4.0.0, soundfile was used, and now torchcodec is used, but the problem is that torchcodec requires ffmpeg, which is problematic to install on platforms such as Google Colab. Therefore, I suggest adding a decoder selection option when loading the dataset.... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7731/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7731/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7729 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7729/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7729/comments | https://api.github.com/repos/huggingface/datasets/issues/7729/events | https://github.com/huggingface/datasets/issues/7729 | 3,300,672,954 | I_kwDODunzps7EvEW6 | 7,729 | OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory | {
"avatar_url": "https://avatars.githubusercontent.com/u/115183904?v=4",
"events_url": "https://api.github.com/users/SaleemMalikAI/events{/privacy}",
"followers_url": "https://api.github.com/users/SaleemMalikAI/followers",
"following_url": "https://api.github.com/users/SaleemMalikAI/following{/other_user}",
"... | [] | open | false | null | [] | [
"Is this related to the \"datasets\" library? @SaleemMalikAI "
] | 2025-08-07T14:07:23 | 2025-09-24T02:17:15 | null | NONE | null | null | null | null | > Hi, is there any solution for that error? I tried to install this one
pip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
this is working fine, but tell me how to install a PyTorch version that is suitable for GPU
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7729/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7729/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7728 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7728/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7728/comments | https://api.github.com/repos/huggingface/datasets/issues/7728/events | https://github.com/huggingface/datasets/issues/7728 | 3,298,854,904 | I_kwDODunzps7EoIf4 | 7,728 | NonMatchingSplitsSizesError and ExpectedMoreSplitsError | {
"avatar_url": "https://avatars.githubusercontent.com/u/104755879?v=4",
"events_url": "https://api.github.com/users/efsotr/events{/privacy}",
"followers_url": "https://api.github.com/users/efsotr/followers",
"following_url": "https://api.github.com/users/efsotr/following{/other_user}",
"gists_url": "https://... | [] | open | false | null | [] | [
"To load just one shard without errors, you should use data_files directly with split set to \"train\", but don’t specify \"allenai/c4\", since that points to the full dataset with all shards.\n\nInstead, do this:\n```\nfrom datasets import load_dataset\nfrom datasets import load_dataset\n\n# Load only one shard of... | 2025-08-07T04:04:50 | 2025-10-06T21:08:39 | null | NONE | null | null | null | null | ### Describe the bug
When loading dataset, the info specified by `data_files` did not overwrite the original info.
### Steps to reproduce the bug
```python
from datasets import load_dataset
traindata = load_dataset(
"allenai/c4",
"en",
data_files={"train": "en/c4-train.00000-of-... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7728/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7728/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
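The workaround discussed in this record can be sketched without network access by building the `load_dataset` keyword arguments as a plain dict; `verification_mode="no_checks"` is the `datasets` option that skips split-size verification, and the shard name below is a placeholder, not a real file:

```python
def single_shard_kwargs(shard: str) -> dict:
    """Build load_dataset(**kwargs) for one c4 shard, skipping split-size checks.

    Passing verification_mode="no_checks" avoids NonMatchingSplitsSizesError
    when data_files selects fewer files than the recorded split info expects.
    """
    return {
        "path": "allenai/c4",
        "data_files": {"train": shard},
        "split": "train",
        "verification_mode": "no_checks",
    }

# Placeholder shard name for illustration only:
kwargs = single_shard_kwargs("en/c4-train.00000-of-NNNNN.json.gz")
# With `datasets` installed: traindata = load_dataset(**kwargs)
```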
https://api.github.com/repos/huggingface/datasets/issues/7727 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7727/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7727/comments | https://api.github.com/repos/huggingface/datasets/issues/7727/events | https://github.com/huggingface/datasets/issues/7727 | 3,295,718,578 | I_kwDODunzps7EcKyy | 7,727 | config paths that start with ./ are not valid as hf:// accessed repos, but are valid when accessed locally | {
"avatar_url": "https://avatars.githubusercontent.com/u/2229300?v=4",
"events_url": "https://api.github.com/users/doctorpangloss/events{/privacy}",
"followers_url": "https://api.github.com/users/doctorpangloss/followers",
"following_url": "https://api.github.com/users/doctorpangloss/following{/other_user}",
... | [] | open | false | null | [] | [] | 2025-08-06T08:21:37 | 2025-08-06T08:21:37 | null | NONE | null | null | null | null | ### Describe the bug
```
- config_name: some_config
data_files:
- split: train
path:
- images/xyz/*.jpg
```
will correctly download but
```
- config_name: some_config
data_files:
- split: train
path:
- ./images/xyz/*.jpg
```
will error with `FileNotFoundError` due to improper url joining. `loa... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7727/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7727/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
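Until the URL joining is fixed upstream, a minimal client-side workaround is to normalize the pattern before it reaches the loader; `posixpath.normpath` strips a leading `./` while leaving already-clean glob patterns untouched:

```python
import posixpath

def normalize_data_file_pattern(pattern: str) -> str:
    """Drop a redundant leading "./" so hf:// URL joining does not break.

    normpath collapses "./images/xyz/*.jpg" to "images/xyz/*.jpg" and leaves
    patterns without the prefix unchanged.
    """
    return posixpath.normpath(pattern)

print(normalize_data_file_pattern("./images/xyz/*.jpg"))  # images/xyz/*.jpg
```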
https://api.github.com/repos/huggingface/datasets/issues/7724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7724/comments | https://api.github.com/repos/huggingface/datasets/issues/7724/events | https://github.com/huggingface/datasets/issues/7724 | 3,292,315,241 | I_kwDODunzps7EPL5p | 7,724 | Can not stepinto load_dataset.py? | {
"avatar_url": "https://avatars.githubusercontent.com/u/13776012?v=4",
"events_url": "https://api.github.com/users/micklexqg/events{/privacy}",
"followers_url": "https://api.github.com/users/micklexqg/followers",
"following_url": "https://api.github.com/users/micklexqg/following{/other_user}",
"gists_url": "... | [] | open | false | null | [] | [] | 2025-08-05T09:28:51 | 2025-08-05T09:28:51 | null | NONE | null | null | null | null | I set a breakpoint in "load_dataset.py" and try to debug my data load codes, but it does not stop at any breakpoints, so "load_dataset.py" can not be stepped into ?
<!-- Failed to upload "截图 2025-08-05 17-25-18.png" --> | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7724/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7724/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
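A common cause of breakpoints never firing is setting them in a copy of the file the interpreter never imports; `inspect` reveals the file that actually runs. `json` stands in here for an installed library (with `datasets` installed, the same call on `datasets.load_dataset` prints the path of the module that actually executes):

```python
import inspect
import json  # stand-in module; substitute datasets.load_dataset when available

# The debugger only stops on breakpoints set in THIS file:
source_file = inspect.getsourcefile(json.dumps)
print(source_file)
```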
https://api.github.com/repos/huggingface/datasets/issues/7723 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7723/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7723/comments | https://api.github.com/repos/huggingface/datasets/issues/7723/events | https://github.com/huggingface/datasets/issues/7723 | 3,289,943,261 | I_kwDODunzps7EGIzd | 7,723 | Don't remove `trust_remote_code` arg!!! | {
"avatar_url": "https://avatars.githubusercontent.com/u/758925?v=4",
"events_url": "https://api.github.com/users/autosquid/events{/privacy}",
"followers_url": "https://api.github.com/users/autosquid/followers",
"following_url": "https://api.github.com/users/autosquid/following{/other_user}",
"gists_url": "ht... | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | [] | [] | 2025-08-04T15:42:07 | 2025-08-04T15:42:07 | null | NONE | null | null | null | null | ### Feature request
Defaulting it to False is a nice balance, but we need to manually set it to True in certain scenarios!
Please add the `trust_remote_code` arg back!
### Motivation
Defaulting it to False is a nice balance, but we need to manually set it to True in certain scenarios!
### Your contribution
defaulting it to Fals... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7723/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7723/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
https://api.github.com/repos/huggingface/datasets/issues/7722 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7722/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7722/comments | https://api.github.com/repos/huggingface/datasets/issues/7722/events | https://github.com/huggingface/datasets/issues/7722 | 3,289,741,064 | I_kwDODunzps7EFXcI | 7,722 | Out of memory even though using load_dataset(..., streaming=True) | {
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "h... | [] | open | false | null | [] | [] | 2025-08-04T14:41:55 | 2025-08-04T14:41:55 | null | NONE | null | null | null | null | ### Describe the bug
I am iterating over a large dataset that I load with streaming=True to avoid running out of memory. Unfortunately, memory usage still increases over time until the process finally runs out of memory (OOM).
### Steps to reproduce the bug
```
ds = load_dataset("openslr/librispeech_asr", split="tra... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7722/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7722/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
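Independent of whatever leaks inside the library, a chunked-consumption pattern keeps at most one chunk of examples referenced at a time; this sketch uses a plain iterable in place of the streamed dataset:

```python
import itertools

def iter_chunks(stream, chunk_size=1000):
    """Yield fixed-size lists from any iterable, holding one chunk at a time."""
    it = iter(stream)
    while True:
        chunk = list(itertools.islice(it, chunk_size))
        if not chunk:
            return
        yield chunk

# Stand-in for the streamed dataset: process chunks, keep only aggregates.
total = 0
for chunk in iter_chunks(range(10_000), chunk_size=1000):
    total += len(chunk)  # keep counters, drop the examples themselves
print(total)  # 10000
```

With a real `IterableDataset`, the same loop applies: compute per-chunk statistics and let each chunk go out of scope before requesting the next.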
https://api.github.com/repos/huggingface/datasets/issues/7721 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7721/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7721/comments | https://api.github.com/repos/huggingface/datasets/issues/7721/events | https://github.com/huggingface/datasets/issues/7721 | 3,289,426,104 | I_kwDODunzps7EEKi4 | 7,721 | Bad split error message when using percentages | {
"avatar_url": "https://avatars.githubusercontent.com/u/3961950?v=4",
"events_url": "https://api.github.com/users/padmalcom/events{/privacy}",
"followers_url": "https://api.github.com/users/padmalcom/followers",
"following_url": "https://api.github.com/users/padmalcom/following{/other_user}",
"gists_url": "h... | [] | open | false | null | [] | [
"I'd like to work on this: add clearer validation/messages for percent-based splits + tests",
"The most basic example is this code:\n`load_dataset(\"openslr/librispeech_asr\", split=\"train[10%:20%]\")`\n\nThis results in this ValueError:\n```\n raise ValueError(f'Unknown split \"{split}\". Should be one of {l... | 2025-08-04T13:20:25 | 2025-08-14T14:42:24 | null | NONE | null | null | null | null | ### Describe the bug
Hi, I'm trying to download a dataset. To avoid loading the entire dataset into memory, I slice it as described [here](https://huggingface.co/docs/datasets/v4.0.0/loading#slice-splits) in 10% steps.
When doing so, the library returns this error:
raise ValueError(f"Bad split: {split}. Available splits... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7721/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7721/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
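The slice syntax that the error message fails to mention can be validated up front. This is a hypothetical helper (not the library's own parser) for the common `name[X%:Y%]` form, producing the clearer message the report asks for:

```python
import re

# Hypothetical validator for percent slices such as "train[10%:20%]".
_PERCENT_SLICE = re.compile(r"^(?P<name>\w+)\[(?P<start>\d+)%:(?P<stop>\d+)%\]$")

def parse_percent_split(split: str):
    m = _PERCENT_SLICE.match(split)
    if m is None:
        raise ValueError(
            f"Bad split: {split!r}. Expected a base split name or a slice "
            f"like 'train[10%:20%]' (see the 'Slice splits' docs)."
        )
    start, stop = int(m["start"]), int(m["stop"])
    if not (0 <= start < stop <= 100):
        raise ValueError(
            f"Bad split: {split!r}. Percent bounds must satisfy 0 <= start < stop <= 100."
        )
    return m["name"], start, stop

print(parse_percent_split("train[10%:20%]"))  # ('train', 10, 20)
```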
https://api.github.com/repos/huggingface/datasets/issues/7720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7720/comments | https://api.github.com/repos/huggingface/datasets/issues/7720/events | https://github.com/huggingface/datasets/issues/7720 | 3,287,150,513 | I_kwDODunzps7D7e-x | 7,720 | Datasets 4.0 map function causing column not found | {
"avatar_url": "https://avatars.githubusercontent.com/u/55143337?v=4",
"events_url": "https://api.github.com/users/Darejkal/events{/privacy}",
"followers_url": "https://api.github.com/users/Darejkal/followers",
"following_url": "https://api.github.com/users/Darejkal/following{/other_user}",
"gists_url": "htt... | [] | open | false | null | [] | [
"Hi, I tried to reproduce this issue on the latest `main` branch but it seems to be working correctly now. My test script (which creates a dummy dataset and applies the `.map()` function) successfully creates and accesses the new column without a `KeyError`.\n\nIt's possible this was fixed by a recent commit. The m... | 2025-08-03T12:52:34 | 2025-08-07T19:23:34 | null | NONE | null | null | null | null | ### Describe the bug
A column returned by the mapping function is not found in the new instance of the dataset.
### Steps to reproduce the bug
Code for reproduction: after running `get_total_audio_length`, it errors out because `data` has no `duration` column.
```
def compute_duration(x):
return {"duration": len(x["audio"]["array"... | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/7720/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/7720/timeline | null | null | {
"completed": 0,
"percent_completed": 0,
"total": 0
} | {
"blocked_by": 0,
"blocking": 0,
"total_blocked_by": 0,
"total_blocking": 0
} | false | null |
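A frequent cause of this kind of `KeyError` is that `Dataset.map` returns a new dataset rather than mutating in place, so the result must be rebound. The pattern can be shown on a plain dict shaped like the issue's audio example:

```python
def compute_duration(x):
    # Same mapping function as in the report, applied to a plain dict.
    return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]}

example = {"audio": {"array": [0.0] * 32_000, "sampling_rate": 16_000}}

# map() does not mutate the dataset, so the result must be rebound:
#   data = data.map(compute_duration)   # correct
#   data.map(compute_duration)          # result discarded -> KeyError later
example = {**example, **compute_duration(example)}
print(example["duration"])  # 2.0
```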