Interactive Intelligence from Human Xperience
Xperience-10M
Dataset Summary
Xperience-10M is a large-scale egocentric multimodal dataset of human experience for embodied AI, robotics, world models, and spatial intelligence.
It contains 10 million experiences (interactions) and 10,000 hours of synchronized first-person recordings with six video streams, audio, stereo depth, camera pose, hand mocap, full-body mocap, IMU, and hierarchical language annotations. With 2.88 billion RGB frames, 720 million depth frames, 576 million pose and mocap frames, and ~1 PB of total data, Xperience-10M is, to our knowledge, by far the largest egocentric dataset with structured 3D/4D multimodal annotations.
Xperience-10M is built for training and evaluating models that do not just see the world, but also understand motion, geometry, interaction, and embodied behavior as a unified stream of experience.
It is designed to support research and product development in:
- embodied AI
- world modeling
- robot learning from human experience
- egocentric perception
- action understanding
- multimodal foundation models
- human-object interaction
- sensor fusion
- 3D/4D scene and motion understanding
- real-to-sim and sim-to-real pipelines
Check out xperience-10m-sample for a data sample (coffee making!).
Check out HOMIE-toolkit for sample code to load/visualize Xperience data.
What makes Xperience-10M different
Most existing egocentric datasets provide only a partial view of embodied experience: RGB video, sparse labels, or limited motion signals. Xperience-10M is designed differently.
It treats experience as a multimodal, structured, and temporally grounded signal. Each episode can include:
- what the wearer sees
- what the wearer hears
- how the camera moves through space
- how the hands move
- how the full body articulates
- what the depth geometry looks like
- what the IMU measures
- what task, subtask, action, interaction, and objects are involved
This makes Xperience-10M especially useful for building systems that learn from real human experience at scale.
Supported Tasks and Use Cases
Xperience-10M can support a broad range of tasks, including but not limited to:
- egocentric action recognition
- task and subtask prediction
- action captioning
- temporal action localization
- human-object interaction understanding
- object grounding and recognition
- audio-visual learning
- visual-language pretraining
- embodied reasoning
- stereo and monocular depth estimation
- visual odometry and trajectory learning
- SLAM and camera pose estimation
- hand pose estimation
- body motion estimation
- multimodal sensor fusion
- imitation learning and behavior modeling
- policy learning for robotics
- world model training
Languages
The language annotations are in English.
Dataset Structure
Xperience-10M is organized as a collection of episodes. Each episode contains synchronized egocentric video files together with a unified annotation.hdf5 file storing annotations, calibration, geometry, motion, inertial signals, and metadata.
Episode Layout
A typical episode folder contains:
episode/
├── fisheye_cam0.mp4   # fisheye camera 0
├── fisheye_cam1.mp4   # fisheye camera 1
├── fisheye_cam2.mp4   # fisheye camera 2
├── fisheye_cam3.mp4   # fisheye camera 3
├── stereo_left.mp4    # rectified stereo left
├── stereo_right.mp4   # rectified stereo right
└── annotation.hdf5    # all annotations and metadata
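A quick way to check that an episode folder is complete is to compare its contents against the layout above. The sketch below is an illustration only (the file names come from the layout shown here, not from an official manifest); the HOMIE-toolkit provides the supported loading utilities.

```python
from pathlib import Path

# File names expected in an episode folder, taken from the layout above.
EXPECTED_FILES = [
    "fisheye_cam0.mp4", "fisheye_cam1.mp4",
    "fisheye_cam2.mp4", "fisheye_cam3.mp4",
    "stereo_left.mp4", "stereo_right.mp4",
    "annotation.hdf5",
]

def missing_files(episode_dir):
    """Return the expected files that are absent from an episode folder."""
    episode = Path(episode_dir)
    return [name for name in EXPECTED_FILES if not (episode / name).exists()]
```

An episode passes the check when `missing_files` returns an empty list.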
Modalities
Each episode may include the following modalities:
Four fisheye video streams
Two rectified stereo video streams
Audio aligned with all video streams
Stereo depth
Camera pose / SLAM trajectory
Two-hand motion capture
Full-body motion capture
IMU
Episode metadata
Hierarchical language captions, including:
- task
- subtask
- action
- interaction
- objects
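To make the five caption levels concrete, here is a purely hypothetical illustration of one annotated moment from a coffee-making episode. The field names mirror the list above; the actual storage format under `metadata/caption` may differ, so refer to the HOMIE-toolkit for the real schema.

```python
# Hypothetical example of the caption hierarchy, coarse (task) to fine
# (objects). The real serialization in metadata/caption may differ.
caption = {
    "task": "make coffee",
    "subtask": "grind beans",
    "action": "pour beans into grinder",
    "interaction": "tilt bean bag over grinder with right hand",
    "objects": ["bean bag", "coffee grinder"],
}

LEVELS = ["task", "subtask", "action", "interaction", "objects"]
assert all(level in caption for level in LEVELS)
```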
HDF5 Annotation Structure
The annotation.hdf5 file stores synchronized annotations and metadata in the following structure:
annotation.hdf5
├── calibration/
│   ├── cam0/
│   ├── cam1/
│   ├── cam2/
│   ├── cam3/
│   └── cam01/
├── slam/
│   ├── quat_wxyz
│   ├── trans_xyz
│   ├── frame_names
│   └── point_cloud
├── depth/
│   ├── depth
│   ├── confidence
│   ├── scale
│   ├── depth_min
│   └── depth_max
├── hand_mocap/
│   ├── left_joints_3d
│   ├── right_joints_3d
│   ├── left_translation
│   ├── right_translation
│   ├── left_mano_hand_pose
│   ├── right_mano_hand_pose
│   ├── left_mano_hand_global_orient
│   ├── right_mano_hand_global_orient
│   ├── left_mano_hand_betas
│   └── right_mano_hand_betas
├── full_body_mocap/
│   ├── keypoints
│   ├── contacts
│   ├── Ts_world_cpf
│   ├── Ts_world_root
│   ├── body_quats
│   ├── left_hand_quats
│   ├── right_hand_quats
│   ├── betas
│   └── frame_nums
├── imu/
│   ├── device_timestamp_ns
│   ├── accel_xyz
│   ├── gyro_xyz
│   └── keyframe_indices
├── video/
│   ├── device_timestamp
│   ├── frame_number
│   └── length_sec
└── metadata/
    └── caption
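The `slam/` group stores the trajectory as separate `quat_wxyz` and `trans_xyz` arrays. Assuming the quaternions are unit quaternions in (w, x, y, z) order, as the dataset name suggests, each pair can be assembled into a standard 4x4 homogeneous transform. This is a generic conversion sketch, not the official toolkit code:

```python
import numpy as np

def pose_matrix(quat_wxyz, trans_xyz):
    """Build a 4x4 homogeneous transform from a unit quaternion (w, x, y, z)
    and a translation (x, y, z). Assumes the quaternion is already normalized."""
    w, x, y, z = quat_wxyz
    # Standard quaternion-to-rotation-matrix formula.
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = trans_xyz
    return T
```

Applying this per frame yields the camera trajectory as a sequence of 4x4 matrices, which most SLAM and visual-odometry tooling expects.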
Annotation Details
Check out HOMIE-toolkit for more technical details of our annotations.
Key Statistics
| Statistic | Value |
|---|---|
| Total number of experiences (interactions) | 10M |
| Video with audio | 10,000 h |
| RGB frames | 2.88B |
| Depth frames | 720M |
| Camera poses | 576M |
| Mocap frames | 576M |
| IMU frames | 7.2B |
| Caption sentences | 16M |
| Caption words | 200M |
| Caption vocabulary size | 6K |
| Number of objects | 350K |
| Total storage | ~1 PB |
| Total trajectory length | 39,000 km |
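The table's counts imply per-modality sampling rates, assuming each count is aggregated over the full 10,000 hours. A quick back-of-the-envelope check:

```python
# Rates implied by the statistics table, assuming counts span all 10,000 h.
HOURS = 10_000
SECONDS = HOURS * 3600        # 36,000,000 s of recording

rgb_fps   = 2.88e9 / SECONDS  # aggregate RGB rate across all six streams
depth_hz  = 720e6 / SECONDS   # depth frame rate
pose_hz   = 576e6 / SECONDS   # camera pose / mocap rate
imu_hz    = 7.2e9 / SECONDS   # IMU sample rate
speed_kmh = 39_000 / HOURS    # average trajectory speed

print(rgb_fps, depth_hz, pose_hz, imu_hz, speed_kmh)
# 80.0 20.0 16.0 200.0 3.9
```

That works out to 80 fps of RGB in aggregate (about 13.3 fps per stream if spread evenly over six streams), 20 Hz depth, 16 Hz pose/mocap, 200 Hz IMU, and an average trajectory speed of ~3.9 km/h, i.e., roughly walking pace.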
Uses
Direct Use
Xperience-10M is intended for direct use in:
- multimodal pretraining
- egocentric perception research
- action understanding
- motion understanding
- 3D/4D reconstruction and tracking
- action-language grounding
- embodied foundation model training
- robotics and imitation learning
- world model training
Out-of-Scope Use
Xperience-10M is not intended for:
- identity recognition
- person re-identification
- biometric profiling
- surveillance applications
- inferring sensitive personal attributes
- safety-critical deployment without additional validation and safeguards
Limitations
Despite its scale and richness, Xperience-10M still has limitations.
- It reflects the environments, devices, and activity distributions represented in the collected data.
- Depth, pose, SLAM, and mocap annotations may contain noise or estimation error.
- Semantic annotations may not fully capture every relevant contextual factor in an episode.
- The scale of the dataset may require substantial storage and compute infrastructure for training.
Social Impact
Xperience-10M can help advance world models, embodied AI, assistive systems, spatial intelligence, and robot learning from real-world human experience.
At the same time, egocentric multimodal data raises important questions around privacy, consent, and downstream misuse. We encourage all users to work with the dataset responsibly and to align usage with privacy protection, human-centered AI principles, and beneficial real-world applications.
Privacy, Ethics, and Consent
Because Xperience-10M contains egocentric recordings of real-world human activity, privacy and consent are central considerations.
All data in Xperience-10M was collected and processed under appropriate consent and review procedures. Personally identifying or sensitive content is handled according to the dataset release policy. Access to some or all portions of the dataset may be controlled to protect participant privacy and support responsible use.
Access
Xperience-10M is released for research and other non-commercial uses under the Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license.
Because of the scale of the dataset (~1 PB) and the sensitive nature of egocentric multimodal data, access may be provided through controlled distribution channels. Users are expected to follow the dataset usage terms and any accompanying privacy, security, or redistribution requirements released with the dataset.
Before using Xperience-10M, please make sure you understand:
- the non-commercial restriction
- attribution requirements
- any privacy and responsible-use conditions associated with the data
- any additional access procedures specified by the dataset maintainers
Citation
@dataset{xperience_10m,
  title={Xperience-10M: A Large-Scale Egocentric Multimodal Dataset with Structured 3D/4D Annotations},
  author={Ropedia},
  year={2026},
  publisher={Hugging Face},
  note={Dataset}
}
