TVCG Session on VR

  • Full Conference Pass (FC)
  • Full Conference One-Day Pass (1D)
  • Basic Conference Pass (BC)
  • Student One-Day Pass (SP)

Date: Wednesday, December 5th
Time: 4:15pm - 4:41pm
Venue: G402 (4F, Glass Building)


Summary: This paper introduces a novel photometric compensation technique for inter-projector luminance and chrominance variations. Although this sounds like a classical technical problem, to the best of our knowledge no existing solution alleviates the spatial non-uniformity among strongly heterogeneous projectors at a perceptually acceptable quality. The primary goal of our method is to increase the perceived seamlessness of the projection system by automatically generating an improved and consistent visual quality. It builds upon existing research on multi-projection systems, but instead of working in perceptually non-uniform color spaces such as CIEXYZ, the overall computation is carried out in the RLab color appearance model, which models color processing in an adaptive, perceptual manner. In addition, we propose an adaptive color gamut acquisition, a spatially varying gamut mapping, and an optimization framework for edge blending. The paper describes the overall workflow and the detailed algorithm of each component, followed by an evaluation validating the proposed method. The experimental results show, both qualitatively and quantitatively, that the proposed method significantly improves the visual quality of a multi-projection display built from projectors with severely heterogeneous color processing.
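The gamut-intersection idea behind the summary can be sketched in a deliberately simplified form. The snippet below works in linear RGB rather than the RLab appearance model the paper actually uses, and the per-projector peak luminances are invented for illustration:

```python
import numpy as np

# Hypothetical per-channel peak luminances (cd/m^2) measured for two
# heterogeneous projectors during gamut acquisition; values are invented.
peaks = np.array([
    [220.0, 240.0, 200.0],   # projector A: R, G, B
    [180.0, 210.0, 230.0],   # projector B: R, G, B
])

def common_gamut_scales(peaks):
    """Per-projector, per-channel scale factors that restrict every
    projector to the brightest output ALL of them can reproduce."""
    common = peaks.min(axis=0)   # channel-wise gamut intersection
    return common / peaks        # <= 1.0 everywhere

def compensate(image_linear, scale):
    """Attenuate a linear-RGB image for one projector; the paper performs
    the analogous mapping perceptually, in the RLab appearance model."""
    return np.clip(image_linear * scale, 0.0, 1.0)

scales = common_gamut_scales(peaks)
```

Here projector A's red and green channels are attenuated to match the dimmer projector B, while its blue channel (already the dimmest) passes through unchanged.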


Date: Wednesday, December 5th
Time: 4:41pm - 5:07pm
Venue: G402 (4F, Glass Building)


Summary: The quality of every dynamic multi-projection mapping system is limited by the quality of the projector-to-tracking-device calibration. Poor calibration produces noticeable artifacts for the user, such as ghosting and seams. In this work we introduce a new, fully automated calibration algorithm, based on consumer-grade hardware, that is tailored to reduce these artifacts. We achieve this goal by repurposing a structured-light scanning setup. A structured-light scanner can generate 3D geometry from a known intrinsic and extrinsic calibration of its components (projector and RGB camera). We invert this process by providing the resulting 3D model to determine the intrinsic and extrinsic parameters of our setup (including those of a variety of tracking systems). Our system matches features and solves for all parameters in a single pass while respecting the lower quality of our sensory input.
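Recovering projection parameters from 2D-3D correspondences, as the summary describes, is classically done with a Direct Linear Transform. The sketch below is only a minimal stand-in for the paper's single-pass solver (which additionally handles tracker parameters and noisy input); the function names are our own:

```python
import numpy as np

def dlt_projection_matrix(pts3d, pts2d):
    """Estimate a 3x4 projection matrix from >= 6 2D-3D correspondences
    using the Direct Linear Transform (homogeneous least squares)."""
    A = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        Xh = [X, Y, Z, 1.0]
        # Each correspondence contributes two linear constraints on P.
        A.append([0.0] * 4 + [-p for p in Xh] + [v * p for p in Xh])
        A.append(list(Xh) + [0.0] * 4 + [-u * p for p in Xh])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    # Right singular vector of the smallest singular value, up to scale.
    return Vt[-1].reshape(3, 4)

def project(P, pts3d):
    """Project 3D points with a 3x4 matrix and dehomogenise."""
    Xh = np.hstack([pts3d, np.ones((len(pts3d), 1))])
    x = Xh @ P.T
    return x[:, :2] / x[:, 2:3]
```

Given the scanned 3D model's points and their observed image positions, the recovered matrix can then be decomposed into intrinsics and extrinsics.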


Date: Wednesday, December 5th
Time: 5:07pm - 5:33pm
Venue: G402 (4F, Glass Building)


Summary: Reconstructing dense, volumetric models of real-world 3D scenes is important for many tasks, but capturing large scenes can take significant time, and the risk of transient changes to the scene goes up as the capture time increases. These are good reasons to want instead to capture several smaller sub-scenes that can be joined to make the whole scene. Achieving this has traditionally been difficult: joining sub-scenes that may never have been viewed from the same angle requires a high-quality camera relocaliser that can cope with novel poses, and tracking drift in each sub-scene can prevent them from being joined to make a consistent overall scene. Recent advances, however, have significantly improved our ability to capture medium-sized sub-scenes with little to no tracking drift: real-time globally consistent reconstruction systems can close loops and re-integrate the scene surface on the fly, whilst new visual-inertial odometry approaches can significantly reduce tracking drift during live reconstruction. Moreover, high-quality regression forest-based relocalisers have recently been made more practical by the introduction of a method to allow them to be trained and used online. In this paper, we leverage these advances to present what to our knowledge is the first system to allow multiple users to collaborate interactively to reconstruct dense, voxel-based models of whole buildings using only consumer-grade hardware, a task that has traditionally been both time-consuming and dependent on the availability of specialised hardware. Using our system, an entire house or lab can be reconstructed in under half an hour and at a far lower cost than was previously possible.
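One building block of joining sub-scenes is estimating the rigid transform between two reconstructions once a relocaliser has supplied corresponding points. A minimal sketch using the standard Kabsch algorithm (not the paper's actual pipeline, which also handles relocalisation, loop closure and surface re-integration):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) with dst ~= src @ R.T + t,
    computed via the Kabsch algorithm on centred point sets."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)   # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

Applying the recovered (R, t) to one sub-scene's voxel model places it in the other's coordinate frame, after which the two can be merged.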


Date: Wednesday, December 5th
Time: 5:33pm - 5:59pm
Venue: G402 (4F, Glass Building)


Summary: We describe a system that dynamically corrects the focus of the real world surrounding the user's near-eye display and, simultaneously, the focus of the internal display for augmented synthetic imagery, with the aim of completely replacing the user's prescription eyeglasses. The ability to adjust focus for both real and virtual content will be useful for a wide variety of users, but especially for users over 40 years of age who have a limited accommodation range. Our proposed solution employs a tunable-focus lens for dynamic prescription vision correction and a varifocal internal display for setting the virtual imagery at appropriate, spatially registered depths. We also demonstrate a proof-of-concept prototype to verify our design and discuss the challenges of building auto-focus augmented reality eyeglasses for both real and virtual content.
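In a deliberately simplified thin-lens view, the tunable-lens setting in such a design reduces to adding the vergence of the current fixation depth to the user's distance prescription. A sketch under that assumption (the function name is hypothetical, and real systems must also account for the eyepiece optics and the user's remaining accommodation range):

```python
def lens_power_dioptres(prescription_d, focus_distance_m):
    """Tunable-lens setting: the user's distance prescription (dioptres)
    plus the vergence of the current fixation depth (1 / distance in m).
    A deliberately simplified thin-lens model."""
    return prescription_d + 1.0 / focus_distance_m
```

As a sanity check, a -2 D myope fixating at 0.5 m gets a setting of -2 + 1/0.5 = 0 D, matching the fact that such a user's far point sits at 0.5 m and needs no correction at that depth.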


 
