ISSN: 2685-9572 Buletin Ilmiah Sarjana Teknik Elektro
Vol. 7, No. 3, September 2025, pp. 541-553
Transfer Learning Models for Precision Medicine: A Review of Current Applications
Yuri Pamungkas 1, Myo Min Aung 2, Gao Yulan 3, Muhammad Nur Afnan Uda 4, Uda Hashim 5
1 Department of Medical Technology, Institut Teknologi Sepuluh Nopember, Indonesia
2 Department of Mechatronics Engineering, Rajamangala University of Technology Thanyaburi, Thailand
3 Department of Mechanical Engineering, Guizhou University of Engineering Science, China
4, 5 Department of Electrical Electronic Engineering, Universiti Malaysia Sabah, Malaysia
ARTICLE INFORMATION

Article History:
Received 17 July 2025
Revised 19 August 2025
Accepted 25 September 2025

ABSTRACT

In recent years, Transfer Learning (TL) models have demonstrated significant promise in advancing precision medicine by enabling the application of machine learning techniques to medical data with limited labeled information. TL overcomes the challenge of acquiring large, labeled datasets, which is often a limitation in medical fields. By leveraging knowledge from pre-trained models, TL offers a solution to improve diagnostic accuracy and decision-making processes in various healthcare domains, including medical imaging, disease classification, and genomics. The research contribution of this review is to systematically examine the current applications of TL models in precision medicine, providing insights into how these models have been successfully implemented to improve patient outcomes across different medical specialties. In this review, studies sourced from the Scopus database, all published in 2024 and selected for their "open access" availability, were analyzed. The research methods involved using TL techniques like fine-tuning, feature-based learning, and model-based transfer learning on diverse datasets. The results of the studies demonstrated that TL models significantly enhanced the accuracy of medical diagnoses, particularly in areas such as brain tumor detection, diabetic retinopathy, and COVID-19 detection. Furthermore, these models facilitated the classification of rare diseases, offering valuable contributions to personalized medicine. In conclusion, Transfer Learning has the potential to revolutionize precision medicine by providing cost-effective and scalable solutions for improving diagnostic capabilities and treatment personalization. The continued development and integration of TL models in clinical practice promise to further enhance the quality of patient care.

Keywords: Transfer Learning; Precision Medicine; Medical Imaging; Deep Learning; Personalized Healthcare

Corresponding Author: Yuri Pamungkas, Department of Medical Technology, Institut Teknologi Sepuluh Nopember, Indonesia. Email: yuri@its.ac.id
This work is open access under a Creative Commons Attribution-Share Alike 4.0 license.
Document Citation: Y. Pamungkas, M. M. Aung, G. Yulan, M. N. A. Uda, and U. Hashim, “Transfer Learning Models for Precision Medicine: A Review of Current Applications,” Buletin Ilmiah Sarjana Teknik Elektro, vol. 7, no. 3, pp. 541-553, 2025, DOI: 10.12928/biste.v7i3.14286.
Precision medicine seeks to deliver tailored healthcare grounded in personal attributes such as genetics, lifestyle, and environment [1]. However, one of the significant challenges in implementing precision medicine is the restricted access to high-quality, extensive data collections [2]. While genomic data, clinical records, and medical imaging hold valuable insights for personalized treatment, the scarcity of comprehensively labeled information hinders the development of robust predictive models [3]. Traditional machine learning (ML) methods often struggle to generalize across patient populations and disease types due to data limitations [4]. Furthermore, the need to protect sensitive information, together with regulatory constraints, complicates the acquisition and sharing of healthcare data across institutions.
To address these difficulties, Transfer Learning (TL) has arisen as a viable approach. TL leverages knowledge learned from a large, well-curated dataset to improve the performance of models trained on smaller, more specialized datasets [5]. In the context of precision medicine, TL allows for the reuse of pretrained models across different medical domains, including genomics, health-related imaging, and evaluation of clinical datasets [6]. This strategy is especially beneficial for addressing the scarcity of data by enabling knowledge transfer from related tasks or datasets. By applying TL, models can improve diagnostic accuracy, reduce training time, and facilitate the integration of multi-modal data for better clinical decision processes in healthcare [7].
Recent progress in TL methodologies has greatly advanced the state of the art in medical AI. Researchers have effectively implemented TL across multiple healthcare fields, such as genomics, in which models pretrained on extensive genomic datasets are adapted to particular categories of diseases [8]. In medical imaging, TL has been used to augment small datasets, enabling improved lesion detection and image segmentation [9]. Additionally, TL has shown promise in integrating multi-omics data, where models initially developed on a single omics dataset (for example, genomics) are repurposed for another domain (for instance, proteomics or metabolomics) [10]. These breakthroughs are setting the stage for more accurate and scalable solutions in precision medicine.
The originality of this study resides in its comprehensive exploration of TL applications across multiple domains of precision medicine, synthesizing recent findings to highlight the most promising strategies for model development. The contribution of this review is twofold: first, it consolidates the current state of TL applications in precision medicine; and second, it identifies the key challenges that remain, such as negative transfer and data heterogeneity. Furthermore, it provides insight into future directions that could enable further advancements, such as federated learning and the use of self-supervised models for pretraining on unlabeled medical data. By summarizing the cutting-edge applications of TL in precision medicine and pointing out the challenges and future research areas, this review aims to guide the development of more efficient, scalable, and adaptable machine learning models for personalized healthcare.
Transfer Learning (TL) represents an artificial intelligence method which allows models or features derived from large, general datasets to be adapted and applied to smaller, task-specific datasets [11]. This method proves especially advantageous in precision medicine, since obtaining extensive annotated datasets is frequently difficult and expensive [12]. By shifting insights from a data-rich source domain to a target domain constrained by limited information, TL enables the development of accurate and efficient models without requiring extensive data collection for the target task [13]. This is particularly crucial in medical fields where datasets can be sparse and expensive, for instance in uncommon diseases or distinct patient cohorts [14]. Additionally, TL can help overcome the challenges of data heterogeneity in medical domains, where datasets can vary significantly due to differences in population characteristics, healthcare systems, or even data collection methods [15]. By transferring knowledge across related domains, TL allows models to better generalize and adapt to the nuances of diverse patient groups, ensuring that predictive models remain robust and effective when applied to different clinical contexts [16]. This flexibility is crucial to enhance diagnostic precision and refine individualized therapeutic approaches in real-world medical settings [17].
When dealing with labeled data, Transfer Learning methods often fall under inductive transfer learning. This approach is used when the model is applied to a target domain where both input data and labeled outputs are available [18]. In precision medicine, this can be seen in applications such as disease diagnosis or patient outcome prediction where existing datasets (e.g., from clinical trials) contain both the inputs (patient data) and the corresponding outputs (disease labels or clinical outcomes). Inductive Transfer Learning enables models to generalize from these datasets to predict on unseen medical data with higher accuracy [19]. On the other hand, unannotated datasets within healthcare fields necessitate unsupervised transfer learning. This technique is employed when the target domain lacks labeled data but still contains useful input data (e.g., raw medical images or genomic data) [20]. The main objective is to derive informative attributes or abstractions out of these datasets without the necessity for explicit annotations, which can be particularly useful in genomics and medical imaging scenarios in which labeled information is frequently limited [21]. Furthermore, transductive transfer learning is implemented when a model is shifted to a related domain but where the target data may be partially labeled or consists of a small number of labeled samples, helping to fine-tune the model with minimal labeled data [22].
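To make the unsupervised setting above concrete, the following is a minimal sketch, assuming a frozen ImageNet-pretrained torchvision backbone used purely as a feature extractor over unlabeled medical images, with the resulting embeddings grouped by k-means; the folder path and cluster count are illustrative placeholders rather than details from any reviewed study.

```python
# Sketch of unsupervised transfer learning: a frozen ImageNet-pretrained
# backbone extracts features from unlabeled medical images, which are then
# grouped with k-means. The image folder path and cluster count are
# illustrative placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms
from sklearn.cluster import KMeans

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
# ImageFolder expects class subdirectories, but the labels are ignored here.
unlabeled = datasets.ImageFolder("data/unlabeled_scans", preprocess)
loader = torch.utils.data.DataLoader(unlabeled, batch_size=32)

backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()   # drop the ImageNet classifier, keep the encoder
backbone.eval()

features = []
with torch.no_grad():
    for images, _ in loader:            # labels are unused
        features.append(backbone(images))
features = torch.cat(features).numpy()

clusters = KMeans(n_clusters=3, n_init=10).fit_predict(features)  # candidate patient subgroups
```

The clusters obtained this way can serve as exploratory groupings or as pseudo-labels for a later supervised stage once a small amount of annotation becomes available.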
In terms of methods, various Transfer Learning techniques can be applied to achieve better performance in precision medicine; these are commonly categorized into fine-tuning, feature-based transfer, and model-based (parameter) transfer learning.
Each of these methods plays a crucial role in enabling models to adapt to diverse medical data, facilitating accurate prediction of disease outcomes, patient care needs, and therapeutic interventions, even in the face of limited labeled data [27]. Thus, Transfer Learning demonstrates strong potential for advancing precision medicine by enhancing computational efficiency and predictive accuracy in healthcare applications; the main types of Transfer Learning models are summarized in Figure 1 [28].
Figure 1. Types of Transfer Learning Models
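To make the fine-tuning and feature-based strategies described above concrete, the following is a minimal PyTorch sketch; it assumes an ImageNet-pretrained torchvision ResNet-50 and a small labeled medical-image folder, and the dataset path, class count, and hyperparameters are illustrative placeholders rather than values taken from any of the reviewed studies.

```python
# Minimal sketch of two common TL strategies:
# (1) feature-based transfer (frozen backbone) and (2) fine-tuning.
# Assumes torchvision, an ImageNet-pretrained ResNet-50, and a small
# labeled medical-image folder; paths and class count are placeholders.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 4  # e.g., brain-tumor MRI categories (placeholder)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/medical_images/train", preprocess)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# (1) Feature-based transfer: freeze the pretrained backbone and train
#     only a new task-specific classification head.
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new head stays trainable

# (2) Fine-tuning: additionally unfreeze the last residual block and update
#     it with a smaller learning rate than the new head.
for param in model.layer4.parameters():
    param.requires_grad = True
optimizer = torch.optim.Adam([
    {"params": model.layer4.parameters(), "lr": 1e-5},
    {"params": model.fc.parameters(), "lr": 1e-3},
])
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Freezing more of the backbone reduces the risk of overfitting on very small datasets, while unfreezing deeper blocks with a small learning rate lets the representation adapt to the target imaging modality.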
The application of Transfer Learning (TL) in precision medicine has attracted considerable attention in recent years, demonstrating its capacity to improve medical outcomes through more accurate diagnoses, personalized treatments, and better-informed decision-making processes [29][30]. This section reviews the current applications of TL models in precision medicine, drawing insights from recent research. The articles used in this review were sourced from the Scopus database, with a focus on publications released in 2024. Only "open access" articles were selected to ensure the transparency and accessibility of the reviewed studies. To identify relevant studies, the following search query was used: ("transfer learning" OR "pretrained model" OR "fine-tuning") AND ("precision medicine" OR "personalized medicine") AND ("machine learning" OR "deep learning" OR "artificial intelligence"). The search yielded a wide range of studies demonstrating how TL techniques are applied across various areas of precision medicine, including medical imaging, genomics, disease classification, and clinical data analysis. The selected journal articles on applications of transfer learning models in precision medicine are presented in Table 1 and Figure 2.
Table 1. Selected Articles Related to Transfer Learning Models for Precision Medicine
Author | Year | Application Area | Dataset Source | Model Used | Transfer Method
Amiri, et al. | 2024 | COVID-19 Decision Support System | COVID-19 World Dataset (Our World in Data), ECDC, GISAID | Neural Networks (Transfer Learning), Multi-attribute Decision-Making (MADM) | Fine-tuning (Transfer Learning)
Haque, et al. | 2024 | Medical Imaging (Leukemia Diagnostics) | Kaggle Datasets (C-NMC, Leukemia Dataset 0.2) | AlexNet, Inception-ResNet, XceptionNet, RetinaNet, CenterNet, DCNN | Fine-tuning, Feature-based
Mohan, et al. | 2024 | Medical Imaging (COVID-19 Detection from Chest X-rays) | COVID-19 Radiography Database (Kaggle), Chest X-ray Images Pneumonia and COVID-19 (Mendeley) | VGG16, Inception ResNet V2, CNN | Fine-tuning, Transfer Learning, CNN from Scratch
Houssein, et al. | 2024 | Medical Imaging (Skin Cancer Classification) | HAM10000, ISIC-2019 | Deep Convolutional Neural Network (DCNN), VGG16, VGG19, DenseNet121, DenseNet201, MobileNetV2 | Fine-tuning, Transfer Learning
Duan, et al. | 2024 | Medical Imaging (Meningioma Ki-67 Prediction) | MRI Images (318 cases) | Deep Transfer Learning (DTL), CNN (ResNet50) | Fine-tuning, Model-based
Ma, et al. | 2024 | Medical Imaging (Autism Spectrum Disorder Classification) | Shenzhen Children's Hospital | Contrastive Variational AutoEncoder (CVAE), Random Forest | Transfer Learning, Fine-tuning
Natha, et al. | 2024 | Medical Imaging (Brain Tumor Detection) | Kaggle Brain Tumor MRI Dataset | AlexNet, VGG19, Stack Ensemble Transfer Learning (SETL_BMRI) | Fine-tuning, Ensemble Learning
Rasa, et al. | 2024 | Medical Imaging (Brain Tumor Detection) | BRATS 2015, Brain Tumor Classification Dataset (Kaggle) | VGG16, ResNet50, MobileNetV2, DenseNet201, EfficientNetB3, InceptionV3 | Fine-tuning
Otaibi, et al. | 2024 | Medical Imaging (Brain Tumor Detection) | Multi-class Brain Tumor MRI Image Dataset (21,672 images) | 2D-CNN, VGG16, k-NN Classifier | Fine-tuning, Transfer learning
Moran, et al. | 2024 | Medical Imaging (COPD Detection using ECG) | ECG signals (COPD and Healthy subjects) | Xception, VGG-19, InceptionResNetV2, DenseNet-121 | Fine-tuning and Transfer Learning
Kumar, et al. | 2024 | Medical Imaging (Alzheimer's Disease Diagnosis) | Kaggle Dataset (12,936 MRI images) | GoogLeNet, FFNN | Fine-tuning, Feature extraction
Azizian, et al. | 2024 | Genomics (miRNA-protein interactions) | RBPSuite, ENCODE, EVPsort, CLASH | Bi-LSTM, CNN, Cosine Similarity | Transfer learning, Cosine similarity
Madduri, et al. | 2024 | Medical Imaging (Diabetic Eye Disease Detection) | DRISHTI-GS, Messidor-2, Messidor, Kaggle cataract dataset | Modified ResNet-50, DenseUNet | Two-phase transfer learning (ResNet-50 for classification, DenseUNet for segmentation)
Gore, et al. | 2024 | Disease Classification (Non-communicable diseases) | NCBI GEO (GSE datasets) | Variational Autoencoder (VAE) with transfer learning from CancerNet | Transfer learning (CancerNet to NCDs)
Taghizadeh, et al. | 2024 | EEG Motor Imagery Classification (Brain-Computer Interface) | Physionet MI Dataset | 1D-CNN with Semi-deep Fine-tuning | Transfer learning with feature-extracted data, fine-tuning pre-trained model
Raza, et al. | 2024 | Medical Imaging (Diabetic Retinopathy, MRI Brain Tumor) | Diabetic Retinopathy Dataset, MRI Brain Tumor Dataset | Mobile-Net, CNN, PSO-Optimized | Transfer learning with Particle Swarm Optimization (PSO) and Constriction Factor
Ansari, et al. | 2024 | Medical Imaging (Lung Cancer Detection) | LIDC-IDRI | ResNet-50, VGG-16, ResNet-101, VGG-19, DenseNet-201, EfficientNet-B4 | Transfer learning with hyperparameter tuning
Alturki, et al. | 2024 | Clinical Data (Chronic Kidney Disease Prediction) | UCI CKD dataset | XGBoost, Random Forest, Extra Trees Classifier (TrioNet ensemble) | KNN imputer for missing values, SMOTE for class imbalance
Jakkaladiki, et al. | 2024 | Medical Imaging (Breast Cancer Diagnosis) | BreakHis (Kaggle), Wisconsin Breast Cancer (UCI) | CNN, DenseNet, Hybrid Transfer Learning | Transfer learning with attention mechanism
Ragab, et al. | 2024 | Medical Imaging (COVID-19 Detection from Chest X-rays) | COVID-ChestXRay Dataset | DenseNet121, Autoencoder-LSTM, Firefly Algorithm (FFA) | Hybrid transfer learning with FFA for hyperparameter optimization
Koshy, et al. | 2024 | Medical Imaging (Breast Cancer Histopathology Classification) | BreaKHis | ResNet-18, CNN, Levenberg–Marquardt Optimization | Transfer learning with fine-tuning
Salinas, et al. | 2024 | Emotion Recognition (Driver Emotion) | CARLA Simulator (Simulated Driving Data) | CNN (VGG16, Inception V3, EfficientNet) | Transfer learning with fine-tuning
Li, et al. | 2024 | Clinical Data (Pediatric Knowledge Extraction) | Mass General Brigham (MGB), Boston Children’s Hospital (BCH) | MUGS (Multisource Graph Synthesis), SVD | Transfer learning, Graph-based feature engineering
Wang, et al. | 2024 | Clinical Data (Epilepsy Recognition) | University of Bonn EEG Dataset | Multi-View Transfer Learning (MVTL-LSR), CNN | Multi-view & transfer learning with privacy protection
Enguita, et al. | 2024 | Genomics (DNA Methylation) | EWAS Data Hub (Illumina 450K and EPIC arrays) | Autoencoders (NCAE), Deep Neural Networks (DNN) | Transfer learning, NCAE embedding
Roy, et al. | 2024 | Clinical Data (Human Activity Detection) | WISDM (Wireless Sensor Data Mining Lab dataset) | Semi-Supervised Learning (SSL), k-means, GMM | On-device learning with sparse labeling and clustering
Ragab, et al. | 2024 | Medical Imaging (Colorectal Cancer Detection) | Warwick-QU Dataset | Dense-EfficientNet, Slime Mould Algorithm (SMA), Deep Hopfield Neural Network (DHNN) | Transfer learning, hyperparameter optimization
Hoseny, et al. | 2024 | Medical Imaging (Diabetic Retinopathy Classification) | Kaggle Dataset (Diabetic Retinopathy Images) | VGG16, CNN, AE, CLAHE | Transfer learning with data cleansing and enhancement filters
Siddique, et al. | 2024 | Medical Imaging (Tumor Classification) | Kaggle (Brain Tumor Images) | Inception-V3, CNN | Transfer learning with Particle Swarm Optimization (PSO)
Xue, et al. | 2024 | Clinical Data (Parkinson’s Disease Severity Prediction) | Parkinson's Telemonitoring Dataset | Random Forest (RF), Shapley Value, Game-based Transfer | Patient-specific Game-based Transfer (PSGT), Instance Transfer
Kunjumon, et al. | 2024 | Medical Imaging (Esophageal Cancer Diagnosis) | Kaggle Endoscopic Image Dataset | Inception-ResNet V2, CNN | Transfer learning with fine-tuning
Ajani, et al. | 2024 | Medical Imaging (COVID-19 Screening) | COVID-19 Chest X-ray (CXR), CT Dataset | GoogleNet, SqueezeNet, CNN | Transfer learning with fine-tuning
Djaroudib, et al. | 2024 | Medical Imaging (Skin Cancer Diagnosis) | Kaggle (HAM10000, Skin Cancer MNIST) | VGG16, CNN | Transfer learning with fine-tuning
Alzubaidi, et al. | 2024 | Medical Imaging (Shoulder Implant Classification) | UCI Shoulder Implant X-ray Dataset | Xception, InceptionResNetV2, MobileNetV2, EfficientNet, DarkNet19 | Self-Supervised Pretraining (SSP), Transfer learning
Hong, et al. | 2024 | Clinical Data (Risk Prediction) | FOS, ARIC, MESA, REGARDS | Logistic Regression, Translasso | Federated learning, Transfer learning, ROSE
Alnuaimi, et al. | 2024 | Medical Imaging (Skin Diseases Detection) | Kaggle DermNet, Google Images, Atlas Dermatology | MobileNet, DenseNet121, CNN | Transfer learning with fine-tuning
Xiang, et al. | 2024 | Clinical Data (Predictive Modeling) | PM2.5, IHS, HUA, Wine, eICU | Random Forest (RF), Federated Learning (FL), Model Averaging | Federated Transfer Learning (FTRF)
Khouadja, et al. | 2024 | Medical Imaging (Lung Cancer Diagnosis) | Military Hospital of Tunis (DICOM CT scans) | ResNet50, InceptionV3, VGG16 | Transfer learning with pre-trained 3D ResNet models from Tencent MedicalNet
Sambyal, et al. | 2024 | Medical Imaging (Calibration of Deep Neural Networks) | Diabetic Retinopathy, Histopathologic Cancer, COVID-19 datasets | ResNet18, ResNet50, WideResNet | Transfer learning, Rotation-based self-supervised learning (SSL)
Benbakreti, et al. | 2024 | Medical Imaging (Breast Cancer Classification) | Inbreast, MIAS, DDSM | ResNet18, AlexNet, InceptionV3 | Transfer learning with pre-trained models
This section presents a comprehensive review of how Transfer Learning (TL) models are currently being applied across various fields in precision medicine. These models are crucial for advancing medical diagnostics, particularly in contexts where data scarcity or complexity poses significant challenges. One key area is medical imaging, where TL is applied for diagnostic purposes such as COVID-19 detection and cancer classification. Amiri et al. [31] developed a COVID-19 decision support system using neural networks, leveraging datasets from sources like Our World in Data and GISAID, with a fine-tuning method. Similarly, Ragab et al. [50] focused on chest X-ray images for COVID-19 detection using DenseNet121 and Autoencoder-LSTM with hybrid transfer learning for hyperparameter optimization. This shows the broad utility of TL in diagnosing infectious diseases in urgent public health situations. Additionally, for skin cancer classification, Houssein et al. [34] applied deep convolutional neural networks (DCNN) and VGG models, showcasing the effectiveness of TL in dermatological diagnostics using datasets like HAM10000 and ISIC-2019. TL models are also widely used for brain tumor detection and leukemia diagnostics. Natha et al. [37] and Rasa et al. [38] explored TL for brain tumor detection, using models such as ResNet50, VGG16, and MobileNetV2 on MRI datasets from Kaggle and BRATS 2015. Their results underscore the potential of TL to enhance tumor detection and improve diagnostic accuracy. For leukemia detection, Haque et al. [32] employed fine-tuning and feature-based TL methods using AlexNet and Inception-ResNet on datasets from Kaggle, demonstrating how TL helps in processing complex medical imaging data for blood cancer diagnoses. In the field of genomics, Azizian et al. [42] utilized TL to study miRNA-protein interactions, applying Bi-LSTM and CNN models with cosine similarity methods. This approach facilitates the analysis of genomic data, which often suffers from sparsity. Similarly, in chronic kidney disease prediction, Alturki et al. [48] used XGBoost and random forest classifiers to make predictions based on clinical datasets, demonstrating how TL is applied to clinical data to forecast disease outcomes.
Clinical data is another area where TL has shown significant promise, particularly in predictive modeling. Hong et al. [65] employed federated learning combined with TL to analyze risk factors for chronic diseases using datasets such as FOS, ARIC, and MESA, ensuring data privacy while still enhancing predictive accuracy. Xue et al. [60] applied TL with Random Forest and Shapley value models to predict Parkinson’s disease severity using telemonitoring datasets, illustrating the integration of TL in clinical decision-making processes. In diabetic retinopathy classification, Hoseny et al. [58] used transfer learning with fine-tuning on VGG16 and CNN models, focusing on the Kaggle diabetic retinopathy dataset. Their research shows how TL helps in processing and enhancing medical images for ophthalmic disease diagnosis. Similarly, in breast cancer diagnosis, Jakkaladiki et al. [49] used hybrid TL models combined with attention mechanisms, demonstrating how TL can aid in cancer detection from imaging datasets like BreakHis and Wisconsin Breast Cancer. Several other studies also demonstrate TL applications in disease classification, emotion recognition, and activity detection. For example, Kunjumon et al. [61] applied transfer learning with fine-tuning to diagnose esophageal cancer using endoscopic image datasets. Similarly, Ajani et al. [62] focused on COVID-19 screening from chest X-rays and CT images using GoogleNet and SqueezeNet, further emphasizing the role of TL in improving diagnostic tools for infectious diseases. Lastly, federated learning is becoming increasingly important in medical applications, as demonstrated by Xiang et al. [67], who applied federated transfer learning for predictive modeling in clinical data, further enhancing the utility of TL across decentralized datasets while maintaining privacy.
Figure 2. Applications of Transfer Learning Models in Precision Medicine
Transfer learning (TL) offers significant advantages in precision medicine, particularly in addressing the constraints arising from scarce medical datasets. One of its key advantages is its capacity to exploit pretrained models built on large, heterogeneous datasets, which can be further tuned or customized for particular clinical applications. This approach lessens the requirement for large amounts of labeled data in target domains, which is often difficult and expensive to obtain in healthcare contexts, such as in rare diseases or specialized imaging datasets [71][72]. By transferring knowledge from a data-rich source domain to a data-limited target domain, TL enables the development of highly accurate models without requiring extensive data collection for every medical task. Another strength of TL in precision medicine is its ability to enhance the generalizability of machine learning models. Fine-tuning models on domain-specific datasets helps improve performance on small datasets while reducing overfitting. This is particularly valuable in medical applications like diagnostic imaging or genomics, where variability in patient data can significantly impact a model's accuracy. For example, TL models in medical imaging, such as those applied to brain tumor detection or skin cancer classification, have shown impressive results when fine-tuned from pretrained architectures such as VGG16, ResNet, and DenseNet [73][74]. Furthermore, TL models can be adapted to a broad array of tasks, including disease detection, prediction, and classification, across different medical domains, enhancing their versatility in clinical settings [75][76].
Despite its many strengths, TL in precision medicine also faces a number of constraints. One of the primary challenges is the risk of negative transfer, where the knowledge transferred from the source domain may not be relevant or beneficial for the target domain. This problem emerges when the differences between the source and target domains are too large, leading to suboptimal model effectiveness. For example, algorithms developed on general medical imaging datasets may struggle to perform well when applied to highly specialized conditions or rare diseases with minimal data availability [77][78]. Additionally, TL models are often sensitive to domain shifts, such as variations in imaging equipment or demographic characteristics, which can compromise their ability to generalize effectively in real-world clinical settings. Another limitation is the dependence on the quality of the pre-trained models and of the datasets on which they were originally trained. The success of TL relies heavily on the quality and representativeness of the dataset used for the model’s initial training. If the source data is biased, unbalanced, or unrepresentative of the target population, the transfer of knowledge may lead to poor model performance or biased predictions in the target domain. For instance, TL models built on data originating from a particular geographic location or patient group might fail to achieve satisfactory results when implemented on groups with distinct demographic traits or varying medical conditions [79][80]. Furthermore, while TL can diminish the dependence on vast quantities of annotated data, refining models still demands substantial computing capacity and specialized knowledge, making it less accessible for some healthcare institutions, especially those with limited technical infrastructure [81].
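One practical safeguard against negative transfer is to compare the transferred model against a target-only baseline on held-out target-domain data before deployment. The following is a minimal sketch of that check, assuming two already-trained PyTorch classifiers and a target-domain validation loader supplied by the caller; the models and loader are assumptions, not artifacts of any reviewed study.

```python
# Sketch of a simple negative-transfer check: keep the transferred model only
# if it beats a model trained from scratch on the same held-out target data.
# `transferred_model` and `scratch_model` are assumed to be already-trained
# PyTorch classifiers; `val_loader` holds target-domain validation batches.
import torch

def accuracy(model, val_loader):
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in val_loader:
            preds = model(images).argmax(dim=1)
            correct += (preds == labels).sum().item()
            total += labels.numel()
    return correct / total

def select_model(transferred_model, scratch_model, val_loader):
    acc_transfer = accuracy(transferred_model, val_loader)
    acc_scratch = accuracy(scratch_model, val_loader)
    if acc_transfer < acc_scratch:
        # Transfer hurt performance on this target domain (negative transfer):
        # fall back to the target-only model or reconsider the source domain.
        return scratch_model
    return transferred_model
```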
The application of transfer learning (TL) in precision medicine has gained notable momentum over the past few years, driven by the increasing need for high-quality models in medical domains that often suffer from limited labeled data. One prominent trend is the integration of TL with advanced deep learning frameworks, especially CNNs, in medical imaging and genomics. For instance, TL methods are being used to adapt architectures pretrained on extensive datasets such as ImageNet (e.g., GoogLeNet) to specialized medical datasets, which enables high performance even with relatively small medical image collections. This approach has been particularly beneficial in areas like cancer detection, skin lesion classification, and brain tumor detection, where acquiring large annotated datasets is costly and time-consuming. Additionally, fine-tuning pre-trained models has become common practice, significantly improving model accuracy without requiring exhaustive retraining from scratch. Another emerging trend is the increasing use of multi-modal transfer learning, where models leverage information originating from multiple sources, including medical imaging, clinical documentation, and genomic data. This hybrid approach allows for a more comprehensive analysis of patient data, integrating insights from multiple domains to generate a broader and more integrated perspective on patient health. For example, in the detection of diabetic retinopathy, researchers combine image data from retinal scans with patient health records to create more robust predictive models. The combination of clinical data and medical imaging, aided by TL, has the potential to provide more accurate disease predictions, as demonstrated in disorders like Alzheimer’s disease and Parkinson’s disease. Furthermore, the adoption of ensemble learning, where multiple pre-trained models are combined, is helping to enhance performance by leveraging the strengths of different architectures.
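As an illustration of the multi-modal trend described above, the following is a minimal PyTorch sketch that fuses a pretrained image encoder with a small branch for tabular clinical features; the clinical-feature count, class count, and fusion design are illustrative assumptions rather than a reconstruction of any specific reviewed model.

```python
# Sketch of multi-modal transfer learning: a pretrained image encoder
# (e.g., for retinal scans) fused with a small branch for tabular clinical
# data. The clinical-feature count and class count are illustrative.
import torch
import torch.nn as nn
from torchvision import models

class MultiModalNet(nn.Module):
    def __init__(self, num_clinical_features=10, num_classes=2):
        super().__init__()
        backbone = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
        self.image_encoder = backbone.features              # pretrained convolutional trunk
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.clinical_encoder = nn.Sequential(               # small branch for clinical records
            nn.Linear(num_clinical_features, 32), nn.ReLU(),
        )
        self.classifier = nn.Linear(1024 + 32, num_classes)  # fused representation

    def forward(self, image, clinical):
        img_feat = self.pool(self.image_encoder(image)).flatten(1)  # (B, 1024)
        clin_feat = self.clinical_encoder(clinical)                  # (B, 32)
        return self.classifier(torch.cat([img_feat, clin_feat], dim=1))

model = MultiModalNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 10))  # dummy batch
```

Late fusion of this kind keeps the pretrained image trunk intact while letting the lightweight clinical branch be trained entirely on the target data.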
Despite these advancements, several research gaps persist. One significant challenge in TL for precision medicine is the issue of negative transfer, where the transfer of knowledge from the source domain to the target domain does not improve, or even degrades, model performance. This is especially problematic when the data distributions of the source and target domains differ substantially, leading to suboptimal results. Another gap relates to data bias and confidentiality, especially within the framework of sensitive healthcare information. There is an increasing need for techniques that mitigate biases in medical datasets, such as those related to race, gender, or geographical location, to guarantee that models achieve broad applicability and equity across varied populations. Additionally, while data privacy remains a critical concern, federated learning models, where data remains decentralized, are showing promise in addressing these issues while still allowing for collaborative learning across multiple institutions. Lastly, the complexity of integrating TL into clinical settings remains a substantial challenge. Despite the promise of TL in research, translating these models into real-world clinical applications requires addressing barriers such as the lack of standardized datasets, variability in healthcare systems, and the interpretability of AI models. Clinicians often require clear, understandable insights from models to make informed decisions, which calls for greater efforts in making AI models more explainable. As the field moves forward, the integration of explainable AI (XAI) with transfer learning techniques in precision medicine is an area that warrants significant research, particularly in ensuring that medical professionals can trust and interpret AI-based recommendations effectively.
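To illustrate the federated direction mentioned above, the following is a minimal sketch of the federated-averaging aggregation step, in which each institution fine-tunes a copy of the shared model on its own private data and only model weights, never patient data, are exchanged; this is a simplified assumption-based sketch, not the aggregation scheme of any reviewed study.

```python
# Sketch of the federated-averaging aggregation step: each institution trains
# a copy of the shared model on its own private data and sends back only the
# resulting weights (never patient data), which are averaged element-wise.
# `local_state_dicts` is assumed to be the list of state_dicts returned by
# the participating institutions after local fine-tuning.
import copy
import torch

def federated_average(global_model, local_state_dicts):
    avg_state = copy.deepcopy(local_state_dicts[0])
    for key in avg_state:
        stacked = torch.stack([sd[key].float() for sd in local_state_dicts])
        avg_state[key] = stacked.mean(dim=0).to(avg_state[key].dtype)
    global_model.load_state_dict(avg_state)
    return global_model
```

Repeating this aggregation over several communication rounds yields a shared model that benefits from all sites' data distributions while the raw records remain decentralized.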
The integration of Transfer Learning (TL) in precision medicine has shown considerable promise across various medical domains, offering an effective solution to the challenges associated with restricted access to data and high computational costs. As demonstrated by the studies summarized in Table 1, TL has been effectively implemented across multiple fields, such as medical imaging, genomics, clinical data analysis, disease classification, and emotion recognition. The ability of TL models to leverage pre-trained networks and customize them for task-specific data has led to improved model effectiveness in applications spanning from early disease detection to patient-specific predictions. Within the domain of medical imaging, TL has played a crucial role in improving the accuracy and efficiency of diagnostic systems, particularly in the detection of diseases like cancer, COVID-19, and brain tumors. By using models such as CNNs, ResNet, and DenseNet, TL has improved feature extraction and decision-making processes in imaging modalities like X-rays, MRIs, and CT scans. These models, when fine-tuned with domain-specific data, have demonstrated significant performance gains, enabling more precise and rapid diagnosis even in resource-constrained environments. In genomics and clinical data analysis, TL has facilitated the extraction of meaningful patterns from complex datasets, such as DNA methylation and pediatric health data. Techniques like Bi-LSTM and Random Forest, integrated with TL approaches, have provided novel insights into disease mechanisms and improved predictive models for conditions like chronic kidney disease and Parkinson’s disease. The transferability of knowledge from general domains to specialized medical contexts has proven to be a game-changer, especially in rare diseases where data scarcity is a major challenge. Moreover, TL has made significant contributions to disease classification and emotion recognition. In chronic disease prediction and classification, TL models have achieved high accuracy in predicting the onset and progression of diseases like lung cancer and diabetes. Similarly, in emotion recognition, TL has been employed to analyze complex datasets, such as driving simulation data, to better understand human behavior and predict outcomes in real-world scenarios.
DECLARATION
Author Contribution
All authors contributed equally as the main contributors to this paper. All authors read and approved the final paper.
Acknowledgement
The authors would like to acknowledge the Department of Medical Technology, Institut Teknologi Sepuluh Nopember, for the facilities and support in this research. The authors also gratefully acknowledge financial support from the Institut Teknologi Sepuluh Nopember for this work, under the project scheme of the Publication Writing and IPR Incentive Program (PPHKI) 2025.
Conflicts of Interest
The authors declare no conflict of interest.
REFERENCES