Video · Sensor · CV · ML · Egocentric Vision

Egocentric Vision for Accessibility AI

First-person video from accessibility users with rich metadata

Hours of content: 1400+

Pricing: Custom

Overview

Comprehensive egocentric video dataset captured by users with visual impairments wearing smart glasses. Includes synchronized audio, OCR outputs, conversation transcripts, and motion data. Well suited for training assistive AI systems and computer vision models that reason about first-person perspectives.

Highlights

  • Real-world first-person perspectives
  • Rich multimodal data: video, audio, OCR, motion
  • Diverse daily scenarios and environments
  • Synchronized data streams
  • Active collection from opt-in user base
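Synchronized streams like these are usually aligned by timestamp at training time. As a minimal sketch only (the 30 Hz motion rate and the timestamp layout are illustrative assumptions, not part of this dataset's spec), nearest-neighbour alignment of a video frame to a motion sample can be done with a binary search:

```python
import bisect

def nearest_sample(timestamps, t):
    """Return the index of the sample whose timestamp is closest to t.

    `timestamps` must be sorted in ascending order.
    """
    i = bisect.bisect_left(timestamps, t)
    if i == 0:
        return 0
    if i == len(timestamps):
        return len(timestamps) - 1
    # Pick the closer of the two neighbouring samples.
    return i if timestamps[i] - t < t - timestamps[i - 1] else i - 1

# Illustrative: align a video frame at t = 1.02 s to 30 Hz motion samples.
motion_ts = [k / 30 for k in range(90)]  # 0.000, 0.033, ..., 2.967 s
idx = nearest_sample(motion_ts, 1.02)
print(round(motion_ts[idx], 3))  # → 1.033
```

The same lookup works for aligning OCR outputs or transcript segments to frames, as long as every stream shares a common clock.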

Deliverables

Files

  • MP4 video (various resolutions)
  • Synchronized audio tracks
  • OCR outputs with bounding boxes (JSON)
  • Conversation transcripts
  • Motion sensor data
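The exact JSON schema of the OCR deliverable is not specified here; purely as an illustration, a loader for per-frame detections might look like the following Python sketch (the field names `text`, `bbox`, and `confidence`, and the (x, y, width, height) box convention, are assumptions):

```python
import json
from dataclasses import dataclass

@dataclass
class OcrDetection:
    text: str
    bbox: tuple   # assumed (x, y, width, height) in pixels
    confidence: float

def load_ocr_detections(json_str, min_confidence=0.5):
    """Parse a hypothetical OCR JSON payload, keeping confident detections."""
    records = json.loads(json_str)
    return [
        OcrDetection(r["text"], tuple(r["bbox"]), r["confidence"])
        for r in records
        if r["confidence"] >= min_confidence
    ]

# Made-up data in the assumed schema:
sample = (
    '[{"text": "EXIT", "bbox": [120, 40, 60, 22], "confidence": 0.93},'
    ' {"text": "??", "bbox": [0, 0, 5, 5], "confidence": 0.20}]'
)
print([d.text for d in load_ocr_detections(sample)])  # → ['EXIT']
```

A confidence threshold like this is a common first filtering step before pairing OCR text with frames for training.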

See our full dataset catalog for all available modalities, scenarios, and technical specifications. Contact us to discuss custom collection needs and access options.