Generation of Video Metadata Supporting Video-GIS Integration (TA-P8)
Author(s):
In-Hak Joo (Electronics and Telecommunications Research Institute, South Korea)
Tae-Hyun Hwang (Electronics and Telecommunications Research Institute, South Korea)
Kyung-Ho Choi (Electronics and Telecommunications Research Institute, South Korea)
Abstract: Geospatial information expressed through video can provide more realistic and comprehensible information that cannot be obtained from a digital map. We introduce video and integrate it into GIS by supporting video-map cross-reference and bi-directional search. To support content-based search of geospatial objects appearing in video, we generate video metadata that holds the necessary information about each geospatial object. The most important part of the video metadata is the outline of an object on a video frame. The outline of an object varies from frame to frame, and thus must be generated for every frame in which the object appears. Because objects move considerably between video frames captured at 1-second intervals, object tracking cannot be done reliably by a simple image-based algorithm alone. In this paper, we devise a semi-automatic object tracking method that combines a photogrammetric solution with an image-based method.
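The combination described in the abstract can be illustrated with a minimal sketch: a photogrammetric step predicts where a known geospatial object should appear in the frame from the camera's position and orientation, and an image-based step refines that prediction by local template matching. All function names, camera parameters, and the simple pinhole/SSD formulation below are illustrative assumptions, not the authors' actual algorithm.

```python
import numpy as np

def project_point(pt_world, cam_pos, R, f=1000.0, cx=320.0, cy=240.0):
    """Photogrammetric prediction (assumed pinhole model): project a 3-D
    geospatial point into the image plane given camera position and a
    world-to-camera rotation matrix R."""
    p_cam = R @ (pt_world - cam_pos)      # point in camera coordinates
    u = f * p_cam[0] / p_cam[2] + cx      # perspective division + principal point
    v = f * p_cam[1] / p_cam[2] + cy
    return np.array([u, v])

def refine_by_matching(image, template, predicted_uv, search=5):
    """Image-based refinement: scan a small window around the predicted
    position for the best sum-of-squared-differences template match."""
    th, tw = template.shape
    u0, v0 = int(predicted_uv[0]), int(predicted_uv[1])
    best_ssd, best_uv = np.inf, (u0, v0)
    for dv in range(-search, search + 1):
        for du in range(-search, search + 1):
            u, v = u0 + du, v0 + dv
            patch = image[v:v + th, u:u + tw]
            if patch.shape != template.shape:
                continue                   # window fell outside the image
            ssd = np.sum((patch - template) ** 2)
            if ssd < best_ssd:
                best_ssd, best_uv = ssd, (u, v)
    return best_uv
```

Because the photogrammetric prediction bounds the search to a small window, the image-based matcher only has to correct residual error rather than track an object that has moved far between 1-second frames, which is the intuition behind the combined approach.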
