Thursday, 23 March 2017

Superpixel-based class-semantic texton occurrences for natural roadside vegetation segmentation

Abstract

Vegetation segmentation from roadside data has received relatively little attention in existing studies, yet it has great potential in a wide range of real-world applications, such as road safety assessment and vegetation condition monitoring. In this paper, we present a novel approach that generates class-semantic color–texture textons and aggregates superpixel-based texton occurrences for vegetation segmentation in natural roadside images. Pixel-level class-semantic textons are learnt by generating two separate sets of bag-of-words visual dictionaries, one from color features and one from filter-bank texture features, for each object class using manually cropped training data. A test image is first oversegmented into a set of homogeneous superpixels. The color and texture features of all pixels in each superpixel are extracted and mapped to their nearest learnt textons, producing a color and a texture texton occurrence matrix. The color and texture texton occurrences are aggregated over each superpixel using a linear mixing method, and the segmentation is finally obtained with a simple yet effective majority voting strategy. Evaluations on two datasets, video data collected by the Department of Transport and Main Roads, Queensland, Australia, and a public roadside grass dataset, show that the proposed approach achieves high accuracy. We also demonstrate its effectiveness for vegetation segmentation in real-world scenarios.
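As a rough illustration of the pipeline described in the abstract, the following Python sketch (using scikit-image's SLIC for the oversegmentation step) shows how per-superpixel color and texture texton occurrences could be accumulated, linearly mixed, and resolved by majority voting. The block-wise texton layout (K textons per class), the precomputed feature maps, and the mixing weight alpha are illustrative assumptions rather than the authors' exact implementation.

import numpy as np
from skimage.segmentation import slic

def segment(image, color_feats, texture_feats,
            color_textons, texture_textons, n_classes,
            alpha=0.5, n_segments=400):
    """image: HxWx3, *_feats: HxWxD per-pixel feature maps,
    *_textons: (n_classes*K, D) texton centres stacked per class (assumed layout)."""
    h, w = image.shape[:2]
    sp = slic(image, n_segments=n_segments, compactness=10, start_label=0)

    def nearest(feats, textons):
        # map every pixel to the index of its nearest texton (squared Euclidean distance)
        f = feats.reshape(-1, feats.shape[-1])
        d = ((f[:, None, :] - textons[None, :, :]) ** 2).sum(-1)
        return d.argmin(1).reshape(h, w)

    c_idx = nearest(color_feats, color_textons)
    t_idx = nearest(texture_feats, texture_textons)
    k_c = color_textons.shape[0] // n_classes   # textons per class (colour dictionary)
    k_t = texture_textons.shape[0] // n_classes # textons per class (texture dictionary)

    labels = np.zeros(sp.max() + 1, dtype=int)
    for s in range(sp.max() + 1):
        mask = sp == s
        n = max(mask.sum(), 1)
        # per-class texton occurrences within the superpixel
        c_occ = np.bincount(c_idx[mask] // k_c, minlength=n_classes) / n
        t_occ = np.bincount(t_idx[mask] // k_t, minlength=n_classes) / n
        # linear mixing of colour and texture occurrences, then majority vote
        labels[s] = (alpha * c_occ + (1 - alpha) * t_occ).argmax()
    return labels[sp]  # per-pixel class map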



http://ift.tt/2o8JVsa


http://ift.tt/2nGS0Ik
