Repositories
spotlight_hardware_designs
Partial set of hardware designs for a Meta-developed brain computer interface (BCI) research prototype system (Spotlight).
OpenSFEDS
OpenSFEDS, a near-eye gaze estimation dataset containing approximately 2M synthetic camera-photosensor image pairs sampled at 500 Hz under varied appearance and camera position.
DVDialogues
Code for "DVD: A Diagnostic Dataset for Multi-step Reasoning in Video Grounded Dialogue".
RF2
RF^2 is a federated recommendation learning simulation framework that can simulate realistic system-induced data heterogeneity and its effect on various tier-aware FL optimizations.
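A minimal sketch (not the RF^2 API) of the kind of tier-aware simulation described above: client participation depends on device tier, which skews what the federated average learns. All names and numbers are illustrative assumptions.

```python
# Minimal sketch of tier-aware federated averaging on synthetic data.
# Names and setup are illustrative; they do not reflect the RF^2 API.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim, rounds = 100, 10, 50

# Assign each client a device tier; higher tiers participate more often,
# which skews the effective training distribution (system-induced heterogeneity).
tiers = rng.integers(0, 3, size=n_clients)           # 0 = low-end, 2 = high-end
participation = np.array([0.1, 0.3, 0.8])[tiers]

# Each tier sees a slightly different local data distribution.
true_w = np.array([rng.normal(t, 1.0, size=dim) for t in tiers])

global_w = np.zeros(dim)
for _ in range(rounds):
    selected = np.where(rng.random(n_clients) < participation)[0]
    if selected.size == 0:
        continue
    # Local "training": one step from the global model toward each client's optimum.
    local_ws = [global_w + 0.5 * (true_w[c] - global_w) for c in selected]
    global_w = np.mean(local_ws, axis=0)              # FedAvg aggregation

print("final global model (biased toward high-tier clients):", global_w.round(2))
```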
fbooja
Implements the bootstrap and jackknife methods described in http://tygert.com/jdssv.pdf.
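For reference, a textbook sketch of the two resampling estimators named above; the repository's implementations follow the linked paper, not this code.

```python
# Textbook bootstrap and jackknife estimators of a statistic's standard error.
import numpy as np

def bootstrap_se(x, stat, n_boot=1000, seed=0):
    """Bootstrap SE: resample with replacement and recompute the statistic."""
    rng = np.random.default_rng(seed)
    reps = [stat(rng.choice(x, size=x.size, replace=True)) for _ in range(n_boot)]
    return np.std(reps, ddof=1)

def jackknife_se(x, stat):
    """Jackknife SE: leave one observation out at a time."""
    n = x.size
    reps = np.array([stat(np.delete(x, i)) for i in range(n)])
    return np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))

x = np.random.default_rng(1).normal(size=200)
print(bootstrap_se(x, np.mean), jackknife_se(x, np.mean))
```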
svinfer
The FORT team released differentially private Condor data to external researchers in H1 2020. Analyzing DP data with classic statistical models leads to biased conclusions, so we are releasing at-scale statistical models that provide valid inference from DP data.
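As a hedged illustration of why naive analysis of noise-added data is biased, and of one standard correction when the noise variance is known; the models svinfer actually ships may differ.

```python
# Naive regression on a noise-added covariate attenuates the slope; subtracting
# the known noise variance from the Gram matrix restores a consistent estimate.
# Illustrative only; not the svinfer API.
import numpy as np

rng = np.random.default_rng(0)
n, sigma_noise = 100_000, 2.0
x = rng.normal(size=n)
y = 1.0 + 3.0 * x + rng.normal(size=n)               # true slope = 3
x_dp = x + rng.normal(scale=sigma_noise, size=n)     # privacy noise on the covariate

X = np.column_stack([np.ones(n), x_dp])
naive = np.linalg.solve(X.T @ X, X.T @ y)            # attenuated slope estimate

XtX_corrected = X.T @ X - n * np.diag([0.0, sigma_noise ** 2])
corrected = np.linalg.solve(XtX_corrected, X.T @ y)

print("naive slope:", naive[1].round(2), "corrected slope:", corrected[1].round(2))
```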
motion-search
Performs motion search and computes motion vectors and residual information to extract features for predicting video compressibility.
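A minimal sketch of exhaustive block-matching motion search with a sum-of-absolute-differences (SAD) criterion; the repository's actual feature extraction is more involved.

```python
# For one block in the current frame, find the best-matching block in the
# reference frame by SAD, and return the motion vector and residual block.
import numpy as np

def motion_search(ref, cur, by, bx, block=16, search=8):
    best, mv = np.inf, (0, 0)
    cur_blk = cur[by:by + block, bx:bx + block]
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue
            sad = np.abs(cur_blk.astype(int) - ref[y:y + block, x:x + block].astype(int)).sum()
            if sad < best:
                best, mv = sad, (dy, dx)
    residual = cur_blk.astype(int) - ref[by + mv[0]:by + mv[0] + block,
                                         bx + mv[1]:bx + mv[1] + block].astype(int)
    return mv, residual

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))       # shifted copy simulates motion
mv, residual = motion_search(ref, cur, by=16, bx=16)
print("motion vector:", mv, "residual energy:", int(np.abs(residual).sum()))
```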
soundspaces-challenge
Starter code for the SoundSpaces challenge at the CVPR 2021 Embodied AI workshop.
preference-exploration
Code for replicating experiments from the paper "Preference Exploration for Efficient Bayesian Optimization with Multiple Outcomes," published at AISTATS 2022.
VidDetours
Official code and data release of CVPR 2024 (highlight) paper "Detours for Navigating Instructional Videos"
EgocentricUserAdaptation
In this codebase we establish a benchmark for egocentric user adaptation based on Ego4D. First, we start from a population model trained on data from many users to learn user-agnostic representations. As a user gains more experience over their lifetime, we aim to tailor the general model into a user-specific expert model.
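A hedged sketch of the population-model-then-user-adaptation recipe described above, with a toy linear model and synthetic data standing in for the actual benchmark.

```python
# Pretrain one model on pooled data from many users, then continue training a
# per-user copy on that user's own (much smaller) stream. Illustrative only.
import copy
import torch

torch.manual_seed(0)
feat_dim, n_classes = 32, 5
population_model = torch.nn.Linear(feat_dim, n_classes)

def train(model, xs, ys, steps=100, lr=0.1):
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(xs), ys)
        loss.backward()
        opt.step()
    return model

# User-agnostic population model, trained on pooled data from many users.
pooled_x, pooled_y = torch.randn(2000, feat_dim), torch.randint(0, n_classes, (2000,))
train(population_model, pooled_x, pooled_y)

# User-specific expert model, adapted from a copy of the population model.
user_x, user_y = torch.randn(100, feat_dim), torch.randint(0, n_classes, (100,))
user_model = train(copy.deepcopy(population_model), user_x, user_y, steps=20)
```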
pix2vec
Deep image generation is becoming a tool to enhance artists' and designers' creative potential. In this paper, we aim to make the generation process more structured and easier to interact with. Inspired by vector graphics systems, we propose a new deep image reconstruction paradigm where the outputs are composed from simple layers, each defined by its color and a vector transparency mask.
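A minimal sketch of the layered compositing idea above, with hand-made masks and colors standing in for model outputs.

```python
# Each layer is a solid color plus a transparency (alpha) mask; layers are
# alpha-composited back to front to reconstruct the image.
import numpy as np

H = W = 64
yy, xx = np.mgrid[0:H, 0:W]

def disk_mask(cy, cx, r):
    """Soft circular alpha mask in [0, 1]."""
    d = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    return np.clip(1.0 - (d - r), 0.0, 1.0)

layers = [
    (np.array([0.9, 0.9, 0.9]), np.ones((H, W))),        # background layer
    (np.array([0.8, 0.2, 0.2]), disk_mask(24, 24, 12)),  # red disk
    (np.array([0.2, 0.3, 0.8]), disk_mask(40, 40, 14)),  # blue disk on top
]

canvas = np.zeros((H, W, 3))
for color, alpha in layers:                              # back-to-front "over" compositing
    canvas = alpha[..., None] * color + (1.0 - alpha[..., None]) * canvas

print(canvas.shape, canvas.min().round(2), canvas.max().round(2))
```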