
This article is part of the series Color in Image and Video Processing.

Open Access Research Article

Colour Vision Model-Based Approach for Segmentation of Traffic Signs

Xiaohong Gao1*, Kunbin Hong1, Peter Passmore1, Lubov Podladchikova2 and Dmitry Shaposhnikov2

Author Affiliations

1 School of Computing Science, Middlesex University, The Burroughs, Hendon, London NW4 4BT, UK

2 Laboratory of Neuroinformatics of Sensory and Motor Systems, A.B. Kogan Research Institute for Neurocybernetics, Rostov State University, Rostov-on-Don 344090, Russia


EURASIP Journal on Image and Video Processing 2008, 2008:386705  doi:10.1155/2008/386705

The electronic version of this article is the complete one and can be found online at: http://jivp.eurasipjournals.com/content/2008/1/386705

Received: 28 July 2007
Revisions received: 25 October 2007
Accepted: 11 December 2007
Published: 13 December 2007

© 2008 The Author(s).

This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


This paper presents a new approach to segmenting traffic signs from the rest of a scene via CIECAM, a colour appearance model. The approach not only puts CIECAM to practical use for the first time since its standardisation in 1998, but also introduces a new way of segmenting traffic signs that improves the accuracy of colour-based approaches. A comparison with the other CIE spaces, including CIELUV and CIELAB, and with RGB colour space is also carried out. The results show that CIECAM outperforms the other three spaces, with accuracy rates of 94%, 90%, and 85% for sunny, cloudy, and rainy days, respectively. The results also confirm that CIECAM predicts colour appearance in a manner similar to that of average observers.
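To illustrate the general idea of colour-space-based sign segmentation, the sketch below thresholds pixels in a perceptual colour space. It uses CIELAB (one of the comparison spaces above) rather than CIECAM, because the CIELAB formulas are compact; the conversion matrix and the chroma/hue thresholds are standard textbook values and illustrative assumptions respectively, not the parameters used in the paper.

```python
import math

def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIELAB under the D65 white point."""
    def inv_gamma(c):
        # Undo the sRGB transfer function to get linear light.
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

    rl, gl, bl = inv_gamma(r), inv_gamma(g), inv_gamma(b)
    # Linear sRGB -> CIE XYZ (D65 primaries).
    x = 0.4124 * rl + 0.3576 * gl + 0.1805 * bl
    y = 0.2126 * rl + 0.7152 * gl + 0.0722 * bl
    z = 0.0193 * rl + 0.1192 * gl + 0.9505 * bl
    # Normalise by the D65 reference white.
    xn, yn, zn = x / 0.95047, y / 1.0, z / 1.08883

    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

    fx, fy, fz = f(xn), f(yn), f(zn)
    L = 116 * fy - 16
    a = 500 * (fx - fy)
    b_lab = 200 * (fy - fz)
    return L, a, b_lab

def is_sign_red(r, g, b):
    """Crude red-sign test: high chroma and a hue angle near red.

    The thresholds (chroma > 40, hue within +/-45 degrees of the a* axis)
    are hypothetical values for illustration only.
    """
    L, a, b_lab = srgb_to_lab(r, g, b)
    chroma = math.hypot(a, b_lab)
    hue = math.degrees(math.atan2(b_lab, a)) % 360
    return chroma > 40 and (hue < 45 or hue > 330)
```

In a segmentation pipeline, a predicate like `is_sign_red` would be applied per pixel to produce a binary mask; thresholding in a perceptually uniform space rather than raw RGB is what gives the colour-based methods compared above their robustness to illumination changes.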
