|Title||Generation of Layered Depth Images from Fused Multiview Depth Maps|
|Tutor||Dipl.-Ing. Engin Kurutepe|
|Professor||Dr.-Ing. Thomas Sikora|
|Abstract||A common approach to coding multiview video data is to store video-plus-depth information in layered images. In such a setting, a typical use case is reprojecting the scene to a different viewpoint. When a single-layered representation is reprojected, holes occur where previously occluded regions become visible; a layered representation avoids this problem. In a multiview setting, data from many views can be fused to obtain the most complete reconstruction of the scene possible. |
A method is therefore needed to convert existing single-layered video-plus-depth multiview frames into a layered representation that is robust to noise and other disturbances, such as regions where the views disagree.
In this thesis, a prototype method has been developed that uses free-space constraints to detect occlusions and a segmentation method to create a list of masks, yielding an accurate, though not maximally complete, reconstruction of the fused scene. Furthermore, quality measurements on the layers of the reconstructed image are given. From this reconstruction it is possible to create a reprojected image of the scene that does not exhibit the occlusion problem.
|Key words||multiview video data, coding, layered image approach, view projection, clustering, sampling, layers, segmentation|
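The occlusion problem described in the abstract can be illustrated with a minimal sketch. The following toy 1-D example is hypothetical and not the thesis implementation: it contrasts warping a single-layered depth image, which leaves a hole where occluded background is revealed, with warping a layered depth image (LDI) that also retains the occluded samples.

```python
def warp(samples, baseline):
    """Forward-warp (x, depth, color) samples by a disparity ~ baseline/depth.
    When several samples land on the same target pixel, the nearer one
    (smaller depth) wins, as in z-buffered splatting."""
    out = {}
    for x, depth, color in samples:
        xt = x + round(baseline / depth)  # simple horizontal-disparity model
        if xt not in out or depth < out[xt][0]:
            out[xt] = (depth, color)
    return out

# Toy scene: a background plane (depth 4) behind a foreground block (depth 2).
background = [(x, 4.0, "bg") for x in range(8)]
foreground = [(x, 2.0, "fg") for x in (3, 4)]

# A single-layered image keeps only the nearest sample per pixel, so the
# background behind the block (x = 3, 4) is lost.
single_layer = background[:3] + foreground + background[5:]
# An LDI keeps all samples along each ray, including the occluded background.
ldi = background + foreground

warped_single = warp(single_layer, baseline=4.0)
warped_ldi = warp(ldi, baseline=4.0)

holes_single = [x for x in range(1, 9) if x not in warped_single]
holes_ldi = [x for x in range(1, 9) if x not in warped_ldi]
print("holes with single layer:", holes_single)  # revealed region has no data
print("holes with LDI:", holes_ldi)              # filled from the extra layer
```

In the single-layered case one target pixel receives no sample because the data that should fill it was occluded in the source view and therefore never stored; the LDI fills the same pixel from its deeper layer. Fusing multiple views, as the thesis proposes, is one way to populate such extra layers.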