Visual attention, defined as the ability of a biological or artificial vision system to rapidly detect potentially relevant parts of a visual scene, provides a general-purpose solution for low-level feature detection in a vision architecture. Valued for its universal detection behaviour, the general model of visual attention is suited to any environment, but it is inferior to dedicated feature detectors in more specific environments. The goal of the work presented in this paper is to remedy this disadvantage by providing an adaptive visual attention model that, after automatic tuning to a given environment during a learning phase, performs comparably to a dedicated feature detector. The paper proposes the structure of an adaptive visual attention model derived from the saliency-based model of visual attention. The adaptive model is characterized by parameters that act at several levels of feature detection. A procedure for automatically tuning these parameters by learning from examples is proposed. The experimental examples provided illustrate the feature selection capacity of the generic visual attention model. The proposed adaptive visual attention model constitutes a framework for further developments and improvements in adaptive visual attention.
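The abstract does not spell out the model's internals, but the general idea of a saliency-based model with tunable parameters learned from examples can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the three toy feature maps (intensity and two contrast maps), the per-map normalization, the single weight per map, and the gradient-descent tuning loop are stand-ins for the paper's actual feature detection levels, parameters, and learning procedure.

```python
import numpy as np

def feature_maps(image):
    """Toy feature extractors: intensity plus horizontal and vertical
    contrast. Stand-ins for the model's feature detection levels,
    which the abstract does not specify."""
    dx = np.abs(np.diff(image, axis=1, prepend=image[:, :1]))
    dy = np.abs(np.diff(image, axis=0, prepend=image[:1, :]))
    return np.stack([image, dx, dy])

def normalized_maps(image):
    """Scale each feature map so its maximum is (close to) 1."""
    maps = feature_maps(image)
    return maps / (maps.max(axis=(1, 2), keepdims=True) + 1e-9)

def saliency(image, weights):
    """Weighted combination of normalized feature maps into one
    saliency map; the weights are the tunable parameters."""
    return np.tensordot(weights, normalized_maps(image), axes=1)

def tune_weights(images, targets, steps=200, lr=0.3):
    """Illustrative 'learning from examples': fit the combination
    weights by gradient descent on the squared error between the
    saliency map and a target map supplied as an example."""
    w = np.ones(3) / 3
    for _ in range(steps):
        grad = np.zeros(3)
        n = 0
        for img, tgt in zip(images, targets):
            maps = normalized_maps(img)
            err = np.tensordot(w, maps, axes=1) - tgt
            grad += np.array([(err * m).sum() for m in maps])
            n += maps[0].size
        w -= lr * grad / n
    return w
```

On an example where the target map emphasizes one feature, the tuned weights shift toward that feature's map, mimicking the adaptation of a generic attention model toward a dedicated detector for a given environment.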