VCBench: Clipped Videos Dataset
Dataset Description
This dataset contains 4,574 clipped video segments from VCBench (the Video Counting Benchmark), designed to evaluate spatial-temporal state maintenance in video understanding models.
Dataset Summary
- Total Videos: 4,574 clips
- Total Size: ~80 GB
- Video Format: MP4 (H.264)
- Categories: 8 subcategories across object counting and event counting tasks
Categories
Object Counting (2,297 clips):
- O1-Snap: Current-state snapshot (252 clips)
- O1-Delta: Current-state delta (98 clips)
- O2-Unique: Global unique counting (1,869 clips)
- O2-Gain: Windowed gain counting (78 clips)
Event Counting (2,277 clips):
- E1-Action: Instantaneous action (1,281 clips)
- E1-Transit: State transition (205 clips)
- E2-Periodic: Periodic action (280 clips)
- E2-Episode: Episodic segment (511 clips)
File Naming Convention
Multi-query clips
Format: {category}_{question_id}_{query_index}.mp4
Example: e1action_0000_00.mp4, e1action_0000_01.mp4
Single-query clips
Format: {category}_{question_id}.mp4
Example: o1delta_0007.mp4, o2gain_0000.mp4
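The two patterns above can be parsed with a small helper. The regex below is illustrative (not part of VCBench) and assumes four-digit question ids and two-digit query indices, as in the examples:

```python
import re

# Illustrative helper (not shipped with VCBench): parse a clip filename
# into category, question id, and optional query index.
CLIP_RE = re.compile(
    r"^(?P<category>[a-z0-9]+)_(?P<qid>\d{4})(?:_(?P<query>\d{2}))?\.mp4$"
)

def parse_clip_name(name):
    m = CLIP_RE.match(name)
    if m is None:
        raise ValueError(f"unrecognized clip name: {name}")
    # query is None for single-query clips
    return m["category"], m["qid"], m["query"]
```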
Video Properties
- Encoding: H.264 (using -c copy for lossless clipping)
- Frame Rates: Preserved from source (3 fps, 24 fps, 25 fps, 30 fps, 60 fps)
- Duration Accuracy: ±0.1s from annotation timestamps
- Quality: Original quality maintained (no re-encoding)
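As a sketch of the clipping step described above, an ffmpeg stream-copy command cuts a segment without re-encoding, which is what preserves the source fps and avoids artifacts. The paths and timestamps below are placeholders, not taken from the dataset:

```python
# Sketch of the lossless clipping described above. "-c copy" stream-copies
# the audio/video streams, so frames are not re-encoded.
def ffmpeg_clip_cmd(src, start_s, end_s, dst):
    """Build an ffmpeg command that cuts [start_s, end_s] via stream copy."""
    return [
        "ffmpeg",
        "-ss", f"{start_s:.3f}",  # clip start (seconds)
        "-to", f"{end_s:.3f}",    # clip end (seconds)
        "-i", src,
        "-c", "copy",             # no re-encoding
        dst,
    ]

# e.g. subprocess.run(ffmpeg_clip_cmd("source.mp4", 12.0, 135.5, "clip.mp4"), check=True)
```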
Source Datasets
Videos are clipped from multiple source datasets:
- YouTube walking tours and sports videos
- RoomTour3D (indoor navigation)
- Ego4D (first-person view)
- ScanNet, ScanNetPP, ARKitScenes (3D indoor scenes)
- TOMATO, CODa, OmniWorld (temporal reasoning)
- Simulated physics videos
Usage
Loading with Python
from huggingface_hub import hf_hub_download
import cv2
# Download a specific video
video_path = hf_hub_download(
repo_id="YOUR_USERNAME/VCBench",
filename="e1action_0000_00.mp4",
repo_type="dataset"
)
# Load with OpenCV
cap = cv2.VideoCapture(video_path)
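After opening a clip, models typically consume frames at a fixed sampling rate rather than at the native fps (which varies from 3 to 60 across clips). A small helper, illustrative and not part of the dataset, that picks evenly spaced frame indices:

```python
def sample_frame_indices(frame_count, fps, every_seconds=1.0):
    """Indices of frames spaced ~every_seconds apart, e.g. for model input."""
    step = max(1, round(fps * every_seconds))
    return list(range(0, int(frame_count), step))

# With the OpenCV capture from above (hypothetical usage):
#   n = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
#   fps = cap.get(cv2.CAP_PROP_FPS)
#   for i in sample_frame_indices(n, fps):
#       cap.set(cv2.CAP_PROP_POS_FRAMES, i)
#       ok, frame = cap.read()
```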
Batch Download
# Install huggingface-cli
pip install huggingface_hub
# Download entire dataset
huggingface-cli download YOUR_USERNAME/VCBench --repo-type dataset --local-dir ./vcbench_videos
Annotations
For complete annotations including questions, query points, and ground truth answers, please refer to the original VCBench repository:
- Object counting annotations: object_count_data/*.json
- Event counting annotations: event_counting_data/*.json
Each annotation file contains:
- id: Question identifier
- source_dataset: Original video source
- video_path: Original video filename
- question: Counting question
- query_time or query_points: Timestamp(s) for queries
- count: Ground truth answer(s)
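To pair an annotation entry with its clip file, the fields above can be mapped back to the naming convention. The helper below is hypothetical; it assumes numeric ids that zero-pad to four digits, matching the examples in this card:

```python
def clip_filename(category, question_id, query_index=None):
    """Map an annotation entry to its clip filename (hypothetical helper)."""
    base = f"{category}_{int(question_id):04d}"
    if query_index is None:
        return f"{base}.mp4"                      # single-query clip
    return f"{base}_{int(query_index):02d}.mp4"   # multi-query clip
```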
Quality Validation
All videos have been validated for:
- ✓ Duration accuracy (100% within ±0.1s)
- ✓ Frame rate preservation (original fps maintained)
- ✓ No frame drops or speed changes
- ✓ Lossless clipping (no re-encoding artifacts)
Citation
If you use this dataset, please cite the VCBench paper:
@article{vcbench2026,
title={VCBench: A Streaming Counting Benchmark for Spatial-Temporal State Maintenance},
author={[Authors]},
journal={[Journal/Conference]},
year={2026}
}
License
MIT License - See LICENSE file for details.
Dataset Statistics
| Category | Clips | Avg Duration | Total Size |
|---|---|---|---|
| O1-Snap | 252 | ~2min | ~4.3 GB |
| O1-Delta | 98 | ~1min | ~1.7 GB |
| O2-Unique | 1,869 | ~3min | ~32 GB |
| O2-Gain | 78 | ~1min | ~1.3 GB |
| E1-Action | 1,281 | ~4min | ~28 GB |
| E1-Transit | 205 | ~2min | ~3.5 GB |
| E2-Periodic | 280 | ~3min | ~8.7 GB |
| E2-Episode | 511 | ~2min | ~4.8 GB |
| Total | 4,574 | - | ~80 GB |
Contact
For questions or issues, please open an issue in the dataset repository.