How to track and segment fish without human annotations: a self-supervised deep learning approach

Saleh, Alzayat, Sheaves, Marcus, Jerry, Dean, and Rahimi Azghadi, Mostafa (2024) How to track and segment fish without human annotations: a self-supervised deep learning approach. Pattern Analysis and Applications, 27 (4).

PDF (Published Version), available under a Creative Commons Attribution licence.
View at Publisher Website: https://doi.org/10.1007/s10044-024-01227...
 
Abstract

Tracking the movements and sizes of fish is crucial to understanding their ecology and behaviour. Knowing where fish migrate, how they interact with their environment, and how their size affects their behaviour can help ecologists develop more effective conservation and management strategies to protect fish populations and their habitats. Deep learning is a promising tool for analysing fish ecology from underwater videos. However, training deep neural networks (DNNs) for fish tracking and segmentation requires high-quality labels, which are expensive to obtain. We propose an alternative unsupervised approach that relies on spatial and temporal variations in video data to generate noisy pseudo-ground-truth labels. We train a multi-task DNN using these pseudo-labels. Our framework consists of three stages: (1) an optical flow model generates the pseudo-labels using spatial and temporal consistency between frames, (2) a self-supervised model refines the pseudo-labels incrementally, and (3) a segmentation network uses the refined labels for training. We perform extensive experiments to validate our method on three public underwater video datasets and demonstrate its effectiveness for video annotation and segmentation. We also evaluate its robustness to different imaging conditions and discuss its limitations.
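
The abstract only outlines the three-stage pipeline. As a rough, non-authoritative illustration of stage (1), the sketch below uses dense optical flow to turn frame-to-frame motion into noisy binary pseudo-masks. The function name, thresholds, and OpenCV-based approach are assumptions chosen for illustration, not the authors' implementation.

    # Hypothetical sketch of stage (1): pseudo-mask generation from optical flow,
    # assuming moving fish against a mostly static background.
    import cv2
    import numpy as np

    def pseudo_masks_from_video(video_path, mag_thresh=2.0):
        """Yield (frame, binary pseudo-mask) pairs built from dense optical flow."""
        cap = cv2.VideoCapture(video_path)
        ok, prev = cap.read()
        if not ok:
            return
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        kernel = np.ones((5, 5), np.uint8)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Dense Farneback flow between consecutive frames (temporal consistency).
            flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
            # Pixels moving noticeably faster than the background become foreground.
            mask = (mag > mag_thresh).astype(np.uint8) * 255
            # Morphological opening removes isolated noisy pixels.
            mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
            yield frame, mask
            prev_gray = gray

        cap.release()

In the paper's framework, such noisy masks would then be refined incrementally by the self-supervised model (stage 2) before being used to train the segmentation network (stage 3).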

Item ID: 83604
Item Type: Article (Research - C1)
ISSN: 1433-755X
Keywords: Computer vision, Convolutional neural networks, Deep learning, Image and video processing, Machine learning, Underwater videos
Copyright Information: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Funders: Australian Research Council (ARC)
Date Deposited: 11 Sep 2024 23:43
