SEMI-AUTOMATIC METHOD FOR CREATING INTERACTIVE VIRTUAL REALITY ENVIRONMENTS

INTRODUCTION

Traditional virtual reality environments render the surroundings from 3D models of objects. This requires graphically modelling the whole environment, which is usually a long and tedious task. 360-degree cameras that have recently become commercially available may change that situation: image-based virtual reality requires no graphical modelling, so it promises to be faster while resembling the real world more closely. The downside of this approach is that a separate image must be captured for each possible location in the scene, resulting in a large number of images that need to be labelled before navigation between them becomes possible. This project investigates methods for creating such navigation maps semi-automatically and provides an interface for displaying the resulting virtual reality environments.

DEMO

This is a small demo of the 360-degree view rendering. You can look around by dragging the image.
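Under the hood, looking around amounts to turning drag offsets into view angles that are fed to the renderer. The minimal Python sketch below shows one way to do that mapping; the sensitivity value and the pitch limits are arbitrary assumptions, not the demo's actual parameters.

    import math

    def drag_to_angles(yaw, pitch, dx, dy, sensitivity=0.005):
        """Update the view angles from a mouse drag of (dx, dy) pixels."""
        yaw = (yaw - dx * sensitivity) % (2 * math.pi)                          # wrap horizontally
        pitch = max(-math.pi / 2, min(math.pi / 2, pitch + dy * sensitivity))   # clamp vertically
        return yaw, pitch

    yaw, pitch = drag_to_angles(0.0, 0.0, dx=120, dy=-40)
    print(round(yaw, 3), round(pitch, 3))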

OVERVIEW

STITCHING

After the spherical images have been captured with the 360-degree camera, they need to be stitched together. Overlapping areas between the spherical images are used to establish correspondences between the images, and a new equirectangular image is generated. This image is later used as input for the rendering process.
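As an illustration of the correspondence step, here is a minimal Python sketch that matches local features between two overlapping captures and fits a transform from the matches. The file names, the ORB detector and the planar homography are assumptions made for the sketch, not the exact stitching method used in this project; a full spherical stitcher would fit a rotation on the sphere and reproject into equirectangular coordinates.

    import cv2
    import numpy as np

    # Load two captures with an overlapping field of view (file names are hypothetical).
    img_a = cv2.imread("capture_a.jpg", cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread("capture_b.jpg", cv2.IMREAD_GRAYSCALE)

    # Detect and describe local features in both images.
    orb = cv2.ORB_create(nfeatures=2000)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Match descriptors between the images; the matches are the correspondences
    # taken from the overlapping areas.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)

    # Estimate a transform that aligns the two images from the correspondences.
    src = np.float32([kp_a[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print("inlier correspondences:", int(inliers.sum()))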

MAP GENERATION

Video sequences of equirectangular images are compared with each other and similarity matrices are computed. These matrices are the input to a novel sequence mapping algorithm based on dynamic time warping: the algorithm greedily places sequences side by side so that the cost at each step is minimal. The result is a map that is used for navigating the scene.
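The sequence mapping algorithm itself is novel and is not reproduced here, but its main ingredients, a pairwise similarity (cost) matrix and a dynamic time warping pass over it, can be sketched in Python. The frame descriptors, the Euclidean distance and the toy data below are assumptions made for the sketch.

    import numpy as np

    def cost_matrix(seq_a, seq_b):
        """Pairwise distance between the frame descriptors of two sequences.

        seq_a, seq_b: arrays of shape (n_frames, descriptor_dim); in the real
        pipeline the descriptors would be computed from the equirectangular frames.
        """
        diff = seq_a[:, None, :] - seq_b[None, :, :]
        return np.linalg.norm(diff, axis=-1)

    def dtw_cost(cost):
        """Classic dynamic time warping: accumulated alignment cost of two sequences."""
        n, m = cost.shape
        acc = np.full((n + 1, m + 1), np.inf)
        acc[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                acc[i, j] = cost[i - 1, j - 1] + min(acc[i - 1, j],      # step in sequence A
                                                     acc[i, j - 1],      # step in sequence B
                                                     acc[i - 1, j - 1])  # step in both
        return acc[n, m]

    # Toy example: two random descriptor sequences stand in for captured videos.
    rng = np.random.default_rng(0)
    seq_a = rng.normal(size=(40, 16))
    seq_b = rng.normal(size=(55, 16))
    print("DTW alignment cost:", dtw_cost(cost_matrix(seq_a, seq_b)))

Costs of this kind would be computed between the captured sequences, and at each step the placement with the lowest cost is chosen, which is the greedy side-by-side placement described above.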

RENDERING

The standard computer graphics pipeline can be used to generate new views from the equirectangular images. An image is texture-mapped onto a 3D sphere and the camera is placed inside it; different views can then be generated by specifying a view angle and rendering the scene.
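In the project this is done with the graphics pipeline by texturing a sphere. As a rough CPU-side illustration, the Python sketch below generates a perspective view directly from an equirectangular image by rotating pinhole-camera rays by the view angles and sampling the panorama at the corresponding spherical coordinates. The field of view, the output size and the nearest-neighbour lookup are arbitrary choices for the sketch.

    import numpy as np

    def render_view(equirect, yaw, pitch, fov_deg=90.0, out_w=640, out_h=480):
        """Generate a perspective view from an equirectangular image.

        equirect: H x W x 3 array covering 360 x 180 degrees.
        yaw, pitch: view angles in radians.
        """
        h, w = equirect.shape[:2]
        f = (out_w / 2.0) / np.tan(np.radians(fov_deg) / 2.0)

        # Ray direction for every output pixel in camera coordinates.
        xs, ys = np.meshgrid(np.arange(out_w) - out_w / 2.0,
                             np.arange(out_h) - out_h / 2.0)
        dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
        dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)

        # Rotate the rays by the view angles (pitch around x, then yaw around y).
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        rot_x = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
        rot_y = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        dirs = dirs @ (rot_y @ rot_x).T

        # Convert directions to spherical coordinates, then to panorama pixels.
        lon = np.arctan2(dirs[..., 0], dirs[..., 2])         # longitude, -pi .. pi
        lat = np.arcsin(np.clip(dirs[..., 1], -1.0, 1.0))    # latitude, -pi/2 .. pi/2
        u = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
        v = ((lat / np.pi + 0.5) * (h - 1)).astype(int)
        return equirect[v, u]

    # Toy call with a random image standing in for a stitched capture.
    pano = np.random.randint(0, 255, size=(1024, 2048, 3), dtype=np.uint8)
    view = render_view(pano, yaw=np.radians(30), pitch=np.radians(-10))
    print(view.shape)  # (480, 640, 3)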
