NU author(s): Dr Shidong Wang
Full text for this publication is not currently held within this repository. Alternative links are provided below where available.
© 2026 Elsevier B.V.
Cross-domain Few-shot Medical Image Segmentation (CD-FSMIS) typically involves pre-training on a large-scale source-domain dataset (e.g., a natural image dataset) before transferring to a target domain with limited data for pixel-wise segmentation. However, due to the significant domain gap between natural and medical images, existing Few-shot Segmentation (FSS) methods suffer severe performance degradation in cross-domain scenarios. We observe that annotated masks alone are insufficient as cross-domain cues, whereas rich textual information can effectively establish knowledge relationships between visual instances and language descriptions, mitigating domain shift. To address this, we propose a plug-in Cross-domain Text-guided (CD-TG) module that leverages text-domain alignment to construct a new alignment space for domain generalization. The module consists of two components: (1) a Text Generation Unit that uses the GPT-4 question-answering system to generate standardized category-level textual descriptions, and (2) a Semantic-guided Unit that aligns visual features with textual embeddings while incorporating existing mask information. We integrate this plug-in module into five mainstream FSS methods and evaluate it on four widely used medical image datasets; the experimental results demonstrate its effectiveness. Code is available at https://github.com/Lilacis/CD_TG.
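The abstract describes the Semantic-guided Unit only at a high level (aligning per-pixel visual features with category-level text embeddings). A minimal sketch of one common way to realize such alignment, cosine similarity between feature vectors and text embeddings, is shown below; the function name, shapes, and pure-Python representation are illustrative assumptions, not the authors' implementation:

```python
from math import sqrt

def cosine_align(visual_feats, text_embeds):
    """Score each visual feature against each category text embedding.

    visual_feats: list of D-dim feature vectors (one per pixel/region).
    text_embeds:  list of D-dim embeddings (one per category description).
    Returns a similarity matrix of shape (len(visual_feats), len(text_embeds)),
    usable as a semantic prior alongside the support-mask cues.
    NOTE: illustrative sketch only; the paper's module may differ.
    """
    def norm(v):
        return sqrt(sum(x * x for x in v))

    sims = []
    for v in visual_feats:
        nv = norm(v)
        row = [
            sum(a * b for a, b in zip(v, t)) / (nv * norm(t))
            for t in text_embeds
        ]
        sims.append(row)
    return sims

# Toy usage: two "pixels", one category embedding.
vis = [[1.0, 0.0], [0.0, 1.0]]
txt = [[1.0, 0.0]]
sim = cosine_align(vis, txt)
```

The first pixel's feature is parallel to the text embedding (similarity 1.0) and the second is orthogonal (0.0); in a full pipeline these scores would be fused with mask-derived prototypes to guide segmentation.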
Author(s): Song F, Bo Y, Wang S, Long Y, Zhang H
Publication type: Article
Publication status: Published
Journal: Pattern Recognition Letters
Year: 2026
Volume: 201
Pages: 66-72
Print publication date: 01/03/2026
Online publication date: 13/01/2026
Acceptance date: 10/01/2026
ISSN (print): 0167-8655
ISSN (electronic): 1872-7344
Publisher: Elsevier
URL: https://doi.org/10.1016/j.patrec.2026.01.009
DOI: 10.1016/j.patrec.2026.01.009