


Deconvolving Diffraction for Fast
Imaging of Sparse Scenes

Mark Sheinin, Matthew O'Toole and Srinivasa Narasimhan

TL;DR abstract
We present a bandwidth-efficient approach for capturing high-speed videos of sparse scenes.

Narrated 2min overview video (with sound effects :) )
Abstract
 
Most computer vision techniques rely on cameras that uniformly sample the 2D image plane. However, there is a class of applications for which this standard uniform 2D sampling is sub-optimal: when the scene points of interest occupy the image plane sparsely, most pixels of a 2D camera sensor are wasted.

Recently, diffractive optics were used in conjunction with sparse (e.g., line) sensors to achieve high-speed capture of such sparse scenes. One such approach, called “Diffraction Line Imaging”, relies on diffraction gratings to spread the point-spread function (PSF) of each scene point from a point into a color-coded shape (e.g., a horizontal line) whose intersection with a line sensor enables point positioning. In this paper, we extend this approach to arbitrary diffractive optical elements and arbitrary sampling of the sensor plane using a convolution-based image formation model. Sparse scenes are then recovered by formulating a convolutional coding inverse problem that can resolve mixtures of diffraction PSFs without the use of multiple sensors, extending the application of diffraction-based imaging to a new class of significantly denser scenes. For the case of a single-axis diffraction grating, we provide an approach to determine the minimal sensor sub-sampling required for accurate scene recovery. Compared to methods that use a speckle PSF from a narrow-band source or a diffuser-based PSF with a rolling shutter sensor, our approach uses spectrally-coded PSFs from broad-band sources and allows arbitrary sensor sampling, respectively. We demonstrate that the presented combination of the imaging approach and scene recovery method is well suited for high-speed marker-based motion capture and particle image velocimetry (PIV) over long periods.
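As a rough sketch of these two ingredients, the NumPy-only toy below simulates the convolution-based image formation model (a streak-shaped diffraction PSF convolved with a sparse point scene) and recovers the scene by solving an L1-regularized convolutional inverse problem with ISTA. All sizes, the PSF shape, and the solver settings are our own illustrative assumptions, not the paper's calibration or solver (the authors use the SPORCO package).

```python
import numpy as np

N = 64

# Sparse scene: a handful of bright points (e.g., mocap markers).
x_true = np.zeros((N, N))
for i, j in [(10, 5), (20, 40), (35, 12), (50, 50), (55, 25)]:
    x_true[i, j] = 1.0

# Toy single-axis diffraction PSF: a horizontal streak whose intensity
# ramp stands in for the spectral (color) coding along the streak.
psf = np.zeros((N, N))
psf[0, :15] = np.linspace(1.0, 0.2, 15)

# Image formation: measurement = PSF (*) scene, via circular FFT convolution.
H = np.fft.fft2(psf)
conv = lambda v: np.real(np.fft.ifft2(H * np.fft.fft2(v)))
corr = lambda v: np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(v)))  # adjoint
y = conv(x_true)

# Recover the sparse scene with ISTA on
#   min_x 0.5 * ||psf (*) x - y||^2 + lam * ||x||_1
lam = 0.05
step = 1.0 / np.max(np.abs(H)) ** 2  # 1 / Lipschitz constant of the data term
objective = lambda v: 0.5 * np.sum((conv(v) - y) ** 2) + lam * np.abs(v).sum()

x = np.zeros_like(y)
obj_start = objective(x)
for _ in range(500):
    x = x - step * corr(conv(x) - y)                          # gradient step
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # soft threshold
obj_end = objective(x)
```

In the actual system the measurement is sub-sampled (e.g., only a few sensor rows are read out) and the PSFs are calibrated rather than hand-crafted; the sketch uses the full measurement for brevity.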


Narrated 12min technical paper talk

BibTeX

@inproceedings{Sheinin:2021:Deconv,
  title={Deconvolving Diffraction for Fast Imaging of Sparse Scenes},
  author={M. Sheinin and M. O'Toole and S. G. Narasimhan},
  booktitle={Proc. ICCP},
  year={2021},
  organization={IEEE}
}


 

Acknowledgments

We thank Brendt Wohlberg for support with the SPORCO Python package, Justin Macey for lending us the motion capture suit and markers, and Adithya Pediredla and Dinesh Reddy for help with the motion capture experiments. This work was supported in part by NSF Grants IIS-1900821 and CCF-1730147. Mark Sheinin was partly supported by the Andrew and Erna Finci Viterbi Foundation.
