Reference implementations, benchmark datasets, and reproducible research releases from the Vidal Lab — including foundational releases like Hopkins 155 and ongoing migration to a centralized GitHub organization.
Three primary destinations.
Datasets and benchmarks released by the lab — Hopkins 155, MHAD, JHU-ISI gestures, and others. Migration to the Vidal Lab GitHub organization is in progress.
Reference implementations of Generalized PCA (GPCA), Sparse Subspace Clustering (SSC), Dual Principal Component Pursuit (DPCP), dynamical-systems distances, and other lab methods.
Active code releases from the Vidal Lab organization on GitHub — projects, course materials, and reproducibility packages.
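As a taste of the reference implementations, here is a minimal sketch of Sparse Subspace Clustering: each point is expressed as a sparse combination of the other points (a lasso relaxation of the l1 self-expression program), and the magnitudes of the coefficients define an affinity for spectral clustering. The function name, parameters, and toy data below are illustrative assumptions, not the lab's released code:

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.cluster import SpectralClustering

def ssc(X, n_clusters, alpha=0.01):
    """Sparse Subspace Clustering sketch (illustrative, not the lab release).

    X: (d, n) data matrix, one point per column.
    Solves a lasso relaxation of  min ||c_i||_1  s.t.  x_i = X c_i, c_ii = 0,
    then spectrally clusters the symmetrized coefficient magnitudes.
    """
    n = X.shape[1]
    C = np.zeros((n, n))
    for i in range(n):
        others = [j for j in range(n) if j != i]  # enforce c_ii = 0
        model = Lasso(alpha=alpha, fit_intercept=False, max_iter=10_000)
        model.fit(X[:, others], X[:, i])
        C[others, i] = model.coef_
    W = np.abs(C) + np.abs(C).T  # symmetric affinity from sparse coefficients
    return SpectralClustering(
        n_clusters=n_clusters, affinity="precomputed", random_state=0
    ).fit_predict(W)

# Toy demo: 40 points drawn from two orthogonal lines in R^3.
rng = np.random.default_rng(0)
t = rng.uniform(0.5, 2.0, 40) * rng.choice([-1.0, 1.0], 40)
X = np.zeros((3, 40))
X[0, :20] = t[:20]  # first subspace: the x-axis
X[1, 20:] = t[20:]  # second subspace: the y-axis
labels = ssc(X, n_clusters=2)
```

Because points on the same line reconstruct each other with zero lasso cost while cross-line coefficients are shrunk to zero, the affinity is block diagonal and spectral clustering recovers the two subspaces.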
Foundational releases that have shaped the field.
The benchmark for motion segmentation under affine and projective camera models: 155 video sequences (120 with two motions, 35 with three), extracted point trajectories, and ground-truth segmentation.
Multimodal Human Action Dataset: synchronized video, motion capture, audio, accelerometer, and depth streams for actions performed by 12 subjects.
The gold-standard dataset for surgical activity recognition and skill assessment: robotic-surgery video and instrument kinematics paired with gesture annotations and expert skill ratings.
Algorithms with code released by the lab.
If you use our data or code, please cite the original papers — and we'd love to hear what you build. Issues and pull requests are welcome on GitHub.