Workshop

Light Field Capture and Processing

ETN-FPI

Chair: Atanas Gotchev, Tampere University of Technology, Finland

The European Training Network on Full Parallax Imaging (ETN-FPI) aims to harmonize and advance research in plenoptic, light field and integral imaging. The workshop brings together ETN-FPI partners, who will discuss their work on setting up, analysing and utilizing various systems for light field capture and processing.

The workshop will be held on Wednesday and is open to all 3DTV participants.

Program

Opening and overview of the workshop (Atanas Gotchev)

3D Light Field Cameras in Industry and Research

Presenter: Dr. Christian Perwaß; Raytrix GmbH, Germany

Abstract: While the principle of light field cameras has been known for over 100 years, they have only recently made the step from being an object of research to becoming a tool for research and industry. The charm of light field cameras lies in the fact that 2D and 3D information can be captured in a single shot with a single camera through a single main lens. Very short exposure times, high-speed recording and microscopy are therefore all in the application realm of light field cameras. The talk will give an introduction to the technology and discuss how light field cameras are applied in fields like optical metrology, particle imaging velocimetry and fluorescence microscopy.
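For readers unfamiliar with the plenoptic principle, the sketch below illustrates how sub-aperture views (and thus the parallax needed for depth) can be pulled out of a single lenslet image. It assumes an idealized, axis-aligned square microlens grid and a hypothetical helper name; actual Raytrix cameras use hexagonal multi-focus microlens arrays and proprietary decoding, so this is only an illustration of the idea.

```python
import numpy as np

def extract_subaperture_views(lenslet_img, ml_size):
    """Rearrange a raw lenslet image into a grid of sub-aperture views.

    Assumes an idealized square microlens grid with ml_size x ml_size
    pixels behind each microlens; real plenoptic data typically needs
    devignetting, rotation and hexagonal-grid handling first.
    """
    lenslet_img = np.atleast_3d(lenslet_img)
    H, W, C = lenslet_img.shape
    ny, nx = H // ml_size, W // ml_size
    img = lenslet_img[:ny * ml_size, :nx * ml_size]
    # Pixel (v, u) under every microlens forms one sub-aperture view.
    views = img.reshape(ny, ml_size, nx, ml_size, C).transpose(1, 3, 0, 2, 4)
    return views  # shape: (ml_size, ml_size, ny, nx, C)

# Example: a synthetic 10 x 10 grid of 15 x 15-pixel microlenses
raw = np.random.rand(150, 150)
sub_views = extract_subaperture_views(raw, ml_size=15)
print(sub_views.shape)  # (15, 15, 10, 10, 1)
```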

Synchronized data capture and calibration of a large-field-of-view moving multi-camera lightfield rig

Presenters: Sandro Esquivel, Yuan Gao, Tim Michels, Luca Palmieri, Reinhard Koch; Multimedia Information Processing Group, Christian-Albrechts-Universität Kiel

Abstract: The processing of large spatio-temporal light fields with a room-sized field of view requires calibration and synchronized data capture across many video cameras in parallel. We describe a large-scale 2D-scanning light field approach in which 24 synchronized color cameras and two RGB-D cameras are aligned horizontally on a 2.5 m rig that can be moved both horizontally and vertically to scan a planar light field of 5 square meters. We will discuss the issues and challenges of calibration, depth-color data capture and synchronized data processing.
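As an illustration of the per-camera calibration step such a rig requires, the following sketch recovers each camera's pose from a checkerboard seen by all cameras, using standard OpenCV calls. The pattern size, square size and function name are assumptions made for the example, not details of the Kiel pipeline.

```python
import cv2
import numpy as np

# Hypothetical values for the illustration, not the rig's actual target.
PATTERN = (9, 6)   # inner corners per row/column
SQUARE = 0.04      # checker square size in metres

board = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
board[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE

def camera_pose(gray, K, dist):
    """Return (R, t) of the board in this camera's coordinate frame, or None."""
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        return None
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    ok, rvec, tvec = cv2.solvePnP(board, corners, K, dist)
    return cv2.Rodrigues(rvec)[0], tvec

# The relative pose between camera i and a reference camera follows by
# chaining the two board poses: T_i<-ref = T_i<-board * T_board<-ref.
```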

Sensitivity Analysis of Time-of-Flight Sensor Fusion

Presenters: Mårten Sjöström, Sebastian Schwarz* and Roger Olsson; Realistic 3D Group, Mid Sweden University, Sweden
* Now at Nokia TECH, Tampere, Finland

Abstract: Capturing a scene’s three-dimensional extension has become essential in many applications such as manufacturing control, surveillance, and entertainment. Dedicated range cameras are often combined with common RGB video cameras to capture the scene’s three dimensions. The physical displacement between the two capture devices requires spatial calibration, and the captured depth information must be up-sampled in a fusion process to match the resolution of the RGB camera. This presentation focuses on the consequences of capture and estimation errors in an RGB and time-of-flight camera combination, and on how the errors propagated through the system may be taken into account in the image and depth fusion process.
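One common way to carry out the up-sampling step described above is joint bilateral up-sampling, where the high-resolution RGB image guides the interpolation of the low-resolution ToF depth. The sketch below is a minimal illustration of that idea (boundary handling and the error modelling discussed in the talk are omitted); it is not necessarily the fusion method used by the presenters.

```python
import numpy as np
import cv2

def joint_bilateral_upsample(depth_lr, rgb_hr, radius=8, sigma_s=8.0, sigma_r=0.1):
    """Up-sample low-resolution ToF depth to the RGB resolution, letting
    RGB intensity edges steer the interpolation weights.

    Minimal sketch: wrap-around at the image borders from np.roll is ignored.
    """
    H, W = rgb_hr.shape[:2]
    depth_up = cv2.resize(depth_lr, (W, H),
                          interpolation=cv2.INTER_NEAREST).astype(np.float32)
    guide = cv2.cvtColor(rgb_hr, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    acc = np.zeros_like(depth_up)
    norm = np.zeros_like(depth_up)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            d_shift = np.roll(depth_up, (dy, dx), axis=(0, 1))
            g_shift = np.roll(guide, (dy, dx), axis=(0, 1))
            # Spatial weight times range weight on the RGB guide image
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)
                       - (guide - g_shift) ** 2 / (2 * sigma_r ** 2))
            acc += w * d_shift
            norm += w
    return acc / norm
```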

A Linear Positioning System for Light Field Capture

Presenters: Suren Vagharshakyan, Ahmed Durmush, Olli Suominen, Robert Bregovic, Atanas Gotchev; 3D Media Group, Tampere University of Technology, Finland

Abstract: We present a linear positioning system (LPS) which allows capturing wide-field-of-view light fields with sub-pixel accuracy. We discuss a method for estimating the positions of a moving camera attached to the LPS. By comparing the estimated camera positions with the expected positions calculated from the LPS specifications, the manufacturer-specified accuracy of the system can be verified. With these data, one can model the light field sampling process more accurately. Furthermore, we present an approach for sparsification of the light field in the shearlet transform domain and discuss its applicability to the reconstruction of dense light fields from a sparse set of cameras.
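The comparison of estimated and expected camera positions can be illustrated with a small rigid-alignment (Kabsch/Procrustes) computation, as sketched below. The input arrays, units and function name are hypothetical; the actual verification procedure used with the LPS may differ.

```python
import numpy as np

def positioning_error(estimated, nominal):
    """Rigidly align estimated camera centres to the nominal LPS positions
    (Kabsch/Procrustes fit) and report the residual deviations.

    `estimated` and `nominal` are hypothetical N x 3 arrays in metres.
    """
    est = np.asarray(estimated, float)
    nom = np.asarray(nominal, float)
    est_c = est - est.mean(axis=0)
    nom_c = nom - nom.mean(axis=0)
    U, _, Vt = np.linalg.svd(est_c.T @ nom_c)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    aligned = est_c @ R.T + nom.mean(axis=0)
    residuals = np.linalg.norm(aligned - nom, axis=1)
    return residuals.mean(), residuals.max()
```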

Multi-Camera Production Systems for Visual Effects in Post-Production

Presenter: Frederik Zilly; Computational Imaging Group, Fraunhofer IIS, Germany

Abstract: We present a system that allows capturing the light field of a live-action scene and thus offers full creative leeway to change camera paths and orientation, as well as to set the focal plane, in post-production. In contrast to previous approaches, the acquisition system involved is compact to handle and thus suitable for on-set operation. Details of the test production, the underlying camera setup and the post-production workflow are given in the presentation.
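Setting the focal plane after capture is conceptually a shift-and-add operation over the array views. The sketch below shows that idea with integer-pixel shifts and hypothetical inputs; it is a simplification for illustration, not the Fraunhofer post-production workflow.

```python
import numpy as np

def refocus(views, baselines, disparity):
    """Shift-and-add refocusing: average the array views after shifting each
    one in proportion to its baseline and the chosen disparity, which places
    the virtual focal plane.

    `views` is a list of H x W x C images and `baselines` a list of (bx, by)
    camera offsets in a common unit; both are hypothetical inputs, and the
    integer-pixel shifts are a simplification of sub-pixel rendering.
    """
    acc = np.zeros(views[0].shape, dtype=np.float64)
    for img, (bx, by) in zip(views, baselines):
        dx = int(round(bx * disparity))
        dy = int(round(by * disparity))
        acc += np.roll(img, (dy, dx), axis=(0, 1))
    return (acc / len(views)).astype(views[0].dtype)
```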