First-person video from users with visual impairments, with rich metadata
Total Duration: 1400+ hours
Quality Check: 100% Verified
Comprehensive egocentric video dataset recorded by users with visual impairments wearing smart glasses. Includes synchronized audio, OCR outputs, conversation transcripts, and motion sensor data. Ideal for training assistive AI systems and computer vision models that operate on first-person perspectives.
Data formats:
- MP4 video (various resolutions)
- Synchronized audio tracks
- OCR outputs with bounding boxes (JSON)
- Conversation transcripts
- Motion sensor data
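
For illustration, here is a minimal sketch of parsing the per-frame OCR outputs and aligning them with the video by timestamp. The field names (`timestamp_ms`, `detections`, `text`, `confidence`, `bbox`) and the file name are assumptions for this example; the dataset's actual JSON schema may differ.

```python
import json
from dataclasses import dataclass
from typing import Iterator, List, Tuple

# Hypothetical OCR record layout -- the dataset's actual schema may differ.
# Assumed per-frame fields: "timestamp_ms" (frame time in milliseconds) and
# "detections", each with "text", "confidence", and "bbox" given as
# [x_min, y_min, x_max, y_max] in pixel coordinates.

@dataclass
class OCRDetection:
    text: str
    confidence: float
    bbox: Tuple[int, int, int, int]  # (x_min, y_min, x_max, y_max)

def load_ocr_frames(path: str) -> Iterator[Tuple[int, List[OCRDetection]]]:
    """Yield (timestamp_ms, detections) for each frame in an OCR JSON file."""
    with open(path, encoding="utf-8") as f:
        frames = json.load(f)
    for frame in frames:
        detections = [
            OCRDetection(
                text=d["text"],
                confidence=d.get("confidence", 1.0),
                bbox=tuple(d["bbox"]),
            )
            for d in frame.get("detections", [])
        ]
        yield frame["timestamp_ms"], detections

# Example: print recognized text per frame, keyed by the video timestamp,
# so it can be matched against the synchronized MP4 and audio tracks.
if __name__ == "__main__":
    for ts, dets in load_ocr_frames("sample_ocr.json"):  # hypothetical file
        print(ts, [d.text for d in dets])
```

Because the OCR, audio, and motion streams are synchronized, a shared timestamp like the one above is the natural join key when assembling multimodal training examples.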