
Open Access

Incremental 2D self-labelling for effective 3D medical volume segmentation with minimal annotations

Lookup NU author(s): Matthew Anderson, Maged Habib, David Steel, Professor Boguslaw Obara

Licence

This work is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).


Abstract

© The Author(s) 2025.

Background: The development and application of deep learning-based models have seen significant success in medical image segmentation, transforming diagnostic and treatment processes. However, these advancements often rely on large, fully annotated datasets, which are challenging to obtain due to the labour-intensive and costly nature of expert annotation. Therefore, we sought to explore the feasibility and efficacy of training 2D models under severe annotation constraints, aiming to optimise segmentation performance while minimising annotation costs.

Methods: We propose an incremental 2D self-labelling framework for segmenting 3D medical volumes from a single annotated slice per volume. A 2D U-Net is first trained on these central slices. The model then iteratively generates and filters pseudo-labels for adjacent slices, progressively fine-tuning itself on an expanding dataset. This process is repeated until the entire training set is pseudo-labelled, producing the final model.

Results: On brain MRI and liver CECT datasets, our self-labelling approach improved segmentation performance compared to using only the sparse ground-truth data, increasing the Dice Similarity Coefficient and Intersection over Union by up to 15.95% and 26.75%, respectively. It also improved 3D continuity, reducing the 95th percentile Hausdorff Distance from 69.88 mm to 36.46 mm. Parameter analysis revealed that a gradual propagation of high-confidence pseudo-labels was most effective.

Conclusion: Our framework demonstrates that a computationally efficient 2D model can be leveraged through self-labelling to achieve robust 3D segmentation performance and coherence from extremely sparse annotations, offering a viable solution to reduce the annotation burden in medical imaging.
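The Methods paragraph above describes an iterative outward-propagation loop; the sketch below restates it in Python. It is a minimal illustration, not the authors' code (see the GitHub link in the Data Access Statement): train_fn, predict_fn, and the mean-probability confidence filter are hypothetical placeholders, and rejected slices are simply skipped here rather than revisited.

```python
import numpy as np

def confidence(prob_map):
    """Mean probability over pixels predicted as foreground: one plausible
    confidence score for pseudo-label filtering (an assumption; the paper
    may use a different criterion)."""
    fg = prob_map[prob_map >= 0.5]
    return float(fg.mean()) if fg.size else 0.0

def incremental_self_label(volumes, central_masks, train_fn, predict_fn,
                           conf_threshold=0.9):
    """Incremental 2D self-labelling over a list of 3D volumes (D, H, W).

    central_masks holds one annotated central slice per volume. train_fn
    fine-tunes (or trains, if passed None) a 2D model on (slice, mask) pairs;
    predict_fn returns a per-pixel foreground probability map for one slice.
    Both are caller-supplied stand-ins for the paper's 2D U-Net.
    """
    # Step 1: train the initial 2D model on the single annotated central
    # slice of every training volume.
    labelled = [(v[v.shape[0] // 2], m) for v, m in zip(volumes, central_masks)]
    model = train_fn(None, labelled)

    # The "frontier" is the next unlabelled slice index on each side of the
    # centre; pseudo-labels are propagated outwards one slice at a time.
    frontiers = [[v.shape[0] // 2 - 1, v.shape[0] // 2 + 1] for v in volumes]

    # Step 2: repeat until every slice of every volume has been visited.
    while any(0 <= idx < v.shape[0]
              for v, front in zip(volumes, frontiers) for idx in front):
        new_pairs = []
        for v, front in zip(volumes, frontiers):
            for side in (0, 1):                   # 0 = below centre, 1 = above
                idx = front[side]
                if not (0 <= idx < v.shape[0]):
                    continue
                prob = predict_fn(model, v[idx])
                # Keep only high-confidence pseudo-labels; the abstract notes
                # that gradual propagation of such labels worked best.
                if confidence(prob) >= conf_threshold:
                    new_pairs.append((v[idx], (prob >= 0.5).astype(np.uint8)))
                front[side] += 1 if side else -1  # advance frontier outwards
        if new_pairs:
            labelled.extend(new_pairs)
            # Step 3: fine-tune on the expanded set of annotated and
            # pseudo-labelled slices.
            model = train_fn(model, labelled)
    return model
```

Advancing the frontier past rejected slices guarantees the sketch terminates; the actual implementation may instead revisit low-confidence slices in later rounds so that, as the abstract states, the entire training set ends up pseudo-labelled.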

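For reference, the overlap metrics reported in the Results follow the standard definitions for a predicted segmentation $A$ and ground truth $B$:

$$\mathrm{DSC}(A, B) = \frac{2\,|A \cap B|}{|A| + |B|}, \qquad \mathrm{IoU}(A, B) = \frac{|A \cap B|}{|A \cup B|},$$

and the 95th percentile Hausdorff Distance (HD95) is the 95th percentile of the symmetric surface distances between the two segmentation boundaries, reported in mm.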

Publication metadata

Author(s): Anderson M, Habib M, Steel DH, Obara B

Publication type: Article

Publication status: Published

Journal: BMC Medical Imaging

Year: 2025

Volume: 25

Issue: 1

Online publication date: 07/11/2025

Acceptance date: 13/10/2025

Date deposited: 24/11/2025

ISSN (electronic): 1471-2342

Publisher: BioMed Central Ltd

URL: https://doi.org/10.1186/s12880-025-01991-9

DOI: 10.1186/s12880-025-01991-9

Data Access Statement: The code for this paper is available at: https://github.com/muanderson/Incremental2D-SelfLabel3D. The MSD: Task01_BrainTumour [31] and MSD: Task04_Hippocampus [31] datasets are available at https://medicaldecathlon.com/. The LiTS17 [32] dataset is available at https://academictorrents.com/.

PubMed id: 41204141


Funding

Bayer AG
Engineering and Physical Sciences Research Council
EPSRC (funder reference EP/L015358/1)
