Combining UAV-imagery and machine learning for wildlife conservation
Semi-arid savannas are endangered by changes in the fragile equilibrium between rainfall, fires and the grazing pressure exerted by wildlife or cattle. To avoid bush encroachment and the decline of perennial grasses, land managers must keep cattle and wildlife numbers in balance with grass availability. Estimating animal populations is therefore an important management task in large farms and conservation parks. Traditional census methods, such as transect counts from a helicopter or mark-recapture surveys, are too expensive and laborious to be conducted on a regular basis. In this context, unmanned aerial vehicles (UAVs) appear as an interesting tool for animal detection: they can be deployed easily, at lower cost and with increased safety. The drawback is that the large number of very high resolution (VHR) images they acquire is difficult to interpret visually. Recent advances in machine learning could allow the detection of animals in these aerial images to be automated. This project implements such algorithms in order to investigate the feasibility and potential benefits of combining machine learning and UAVs for animal detection. The study uses an image dataset acquired in the Kuzikus Wildlife Reserve in Namibia and a ground truth obtained through crowd-sourcing. The machine learning techniques involved include bags of visual words, exemplar SVMs and active learning. The promising results show that recall rates in the range of 60 to 80% are achievable if a low precision (5 to 20%) is accepted. The study also discusses parameters related to data acquisition, such as image resolution and the time of day at which the images are acquired.
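The abstract does not detail the detection pipeline, so the following is only a minimal, illustrative sketch of one of the named ingredients, a bag-of-visual-words encoding of image patches fed to a linear SVM, assuming OpenCV, scikit-learn and NumPy; the vocabulary size, function names and variables are hypothetical and do not reproduce the thesis's exemplar-SVM or active-learning components.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

N_WORDS = 64  # size of the visual vocabulary (assumed value)


def extract_descriptors(patch):
    """Return local ORB descriptors for one grayscale image patch."""
    orb = cv2.ORB_create()
    _, descriptors = orb.detectAndCompute(patch, None)
    return descriptors if descriptors is not None else np.empty((0, 32))


def build_vocabulary(patches):
    """Cluster all local descriptors into N_WORDS visual words."""
    all_desc = np.vstack([extract_descriptors(p) for p in patches])
    return KMeans(n_clusters=N_WORDS, n_init=10).fit(all_desc.astype(np.float32))


def bovw_histogram(patch, vocabulary):
    """Encode a patch as a normalised histogram of visual-word occurrences."""
    desc = extract_descriptors(patch).astype(np.float32)
    if len(desc) == 0:
        return np.zeros(N_WORDS)
    words = vocabulary.predict(desc)
    hist = np.bincount(words, minlength=N_WORDS).astype(float)
    return hist / hist.sum()


def train_detector(animal_patches, background_patches):
    """Train a linear SVM separating animal patches from background patches."""
    patches = animal_patches + background_patches
    vocab = build_vocabulary(patches)
    X = np.array([bovw_histogram(p, vocab) for p in patches])
    y = np.array([1] * len(animal_patches) + [0] * len(background_patches))
    clf = LinearSVC(C=1.0).fit(X, y)
    return vocab, clf
```

In such a setup, the trade-off between recall and precision reported in the abstract would typically be controlled by thresholding the SVM decision scores on candidate patches extracted from the UAV images.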