ISSN: 2685-9572 Buletin Ilmiah Sarjana Teknik Elektro
Vol. 7, No. 3, September 2025, pp. 595-607
Hybrid Attention-Enhanced CNNs for Small Object Detection in Mammography, CT, and Fundus Imaging
Hewa Majeed Zangana 1, Marwan Omar 2, Shuai Li 3, Jamal N. Al-Karaki 4, Anik Vega Vitianingsih 5
1 Duhok Polytechnic University, Duhok, Iraq
2 Illinois Institute of Technology, USA
3 University of Oulu, Finland
4 Zayed University, Abu Dhabi, UAE
5 Universitas Dr. Soetomo, Indonesia
ARTICLE INFORMATION

Article History: Received 18 June 2025; Revised 18 September 2025; Accepted 04 October 2025

ABSTRACT

Early detection of subtle pathological features in medical images is critical for improving patient outcomes but remains challenging due to low contrast, small lesion size, and limited annotated data. The research contribution is a hybrid attention-enhanced CNN specifically tailored for small object detection across mammography, CT, and retinal fundus images. Our method integrates a ResNet-50 backbone with a modified Feature Pyramid Network, dilated convolutions for contextual scale expansion, and combined channel–spatial attention modules to preserve and amplify fine-grained features. We evaluate the model on public benchmarks (DDSM, LUNA16, IDRiD) using standardized preprocessing, extensive augmentation, and cross-validated training. Results show consistent gains in detection and localization: ECNN achieves an F1-score of 88.2% (95% CI: 87.4–89.0), mAP@0.5 of 86.8%, IoU of 78.6%, and a low rate of false positives per image (FPPI = 0.12) versus baseline detectors. Ablation studies confirm the individual contributions of dilated convolutions, attention modules, and multi-scale fusion. However, these gains involve higher computational costs (≈2× training time and increased memory footprint), and limited dataset diversity suggests caution regarding generalizability. In conclusion, the proposed ECNN advances small-object sensitivity for early disease screening while highlighting the need for broader clinical validation and interpretability tools before deployment.
Keywords: Hybrid CNN; Multi-Scale Feature Fusion; Dilated Convolutions; Attention Mechanisms; Computational Efficiency; Dataset Bias; Early Disease Screening
Corresponding Author: Hewa Majeed Zangana, Duhok Polytechnic University, Duhok, Iraq. Email: hewa.zangana@dpu.edu.krd
This work is open access under a Creative Commons Attribution-Share Alike 4.0 | ||
Document Citation: H. M. Zangana, M. Omar, S. Li, J. N. Al-Karaki, and A. V. Vitianingsih, “Hybrid Attention-Enhanced CNNs for Small Object Detection in Mammography, CT, and Fundus Imaging,” Buletin Ilmiah Sarjana Teknik Elektro, vol. 7, no. 3, pp. 595-607, 2025, DOI: 10.12928/biste.v7i3.14015. | ||
In recent years, medical imaging has evolved into a cornerstone of diagnostic medicine, aiding in the early detection, diagnosis, and monitoring of a wide range of diseases. Techniques such as mammography, CT scans, and retinal fundus imaging have significantly improved diagnostic precision. However, one of the persistent challenges in this domain lies in the detection of small objects—minute pathological features such as microcalcifications, early-stage tumors, or subtle lesions that are often indicative of early disease progression. These small-scale abnormalities are frequently overshadowed by anatomical structures, suffer from low signal-to-noise ratios, and often exhibit significant variations across patients and imaging modalities [1]-[3].
Despite significant advances in computer vision, mainstream object detection algorithms often struggle with small object detection, particularly in the context of medical imaging. Models like Faster R-CNN, YOLOv3, and SSD, while effective for medium-to-large scale object detection, exhibit reduced sensitivity when applied to subtle pathological features [4]-[6]. This performance gap is primarily due to limitations in feature representation, down-sampling during convolution, and insufficient focus on fine-grained details [1], [7]. Furthermore, the computational burden of enhancing sensitivity poses an additional challenge for deployment in real-time or resource-constrained clinical environments [8], [9]. This research aims to address the limitations of conventional CNN-based models in detecting small-scale anomalies in medical images by proposing an enhanced CNN architecture tailored for early disease screening; the key objectives and contributions of this work are detailed below.
Despite these advances, existing approaches suffer from several limitations that hinder their generalizability. Publicly available datasets are often biased, overrepresenting specific pathologies or imaging conditions, which may lead to optimistic but non-transferable performance. Moreover, differences between modalities—such as mammography versus retinal scans—make it difficult for general-purpose architectures to maintain consistent accuracy across domains. Techniques like attention mechanisms and multi-scale fusion, while powerful, introduce higher memory consumption and increase the risk of overfitting, particularly when training on small datasets. Recent alternatives such as transformer-based models and self-supervised learning frameworks attempt to address these issues, but they often demand substantially greater computational resources. By contrast, our hybrid design seeks to balance representational power with efficiency, though we acknowledge that such trade-offs warrant careful evaluation.
The research contribution is an integrated hybrid architecture — a ResNet-50 + modified FPN backbone augmented with dilated convolutions and combined channel–spatial attention — that improves small-object sensitivity across mammography, CT, and fundus imaging while balancing accuracy and computational cost. The novelty of the proposed method lies in its integrated approach that specifically targets the multi-scale and low-contrast nature of small medical anomalies. Unlike prior methods that adapt general-purpose object detectors to medical data, our approach is purpose-built with three major enhancements: (1) attention-guided feature extraction to highlight low-contrast anomalies, (2) contextual scale expansion using dilated convolutions for greater spatial context without resolution loss, and (3) multi-resolution integration to ensure that small objects remain represented throughout the CNN hierarchy [17]-[19]. Previous attempts at hybrid designs have improved sensitivity but often at the expense of efficiency or clinical interpretability. Our approach overcomes these shortcomings by explicitly balancing accuracy with computational efficiency, quantified in terms of FLOPs and inference speed.
The chosen datasets—DDSM, LUNA16, and IDRiD—were selected because they represent diverse imaging modalities (X-ray, CT, fundus), provide varied lesion sizes, and are publicly available with standardized annotations. This ensures cross-dataset generalizability and practical relevance. Moreover, the improved detection of subtle anomalies directly addresses clinical consequences such as reducing missed early diagnoses in breast cancer, lung cancer, and diabetic retinopathy screening [20]-[22]. We also acknowledge potential deployment challenges, including interpretability and regulatory approval, which will be discussed further in the conclusion. By bridging methodological rigor with clinical utility, our framework offers a scalable path toward real-world medical integration.
Object detection has become a foundational task in computer vision, enabling applications in autonomous driving, surveillance, healthcare, and robotics. Over time, deep learning, particularly Convolutional Neural Networks (CNNs), has significantly improved object detection accuracy and adaptability across domains [7],[17]. Yet, detecting small objects—especially in critical applications like medical imaging—remains a challenging endeavor due to scale variance, low resolution, and contextual ambiguity [1][2]. Traditional object detection pipelines transitioned from hand-crafted features (e.g., HOG, SIFT) to deep learning-based feature extractors, with landmark models such as R-CNN and its derivatives laying the groundwork for modern approaches [1],[18]. One-stage detectors like YOLO and SSD offer speed advantages, while two-stage detectors such as Faster R-CNN provide better localization performance, especially for complex and small regions [22][23].
The emergence of lightweight detectors has enabled deployment in real-time or resource-constrained environments. [8] reviewed efficient CNN-based models suitable for embedded platforms, while [12] introduced YOLO-LITE to support detection on non-GPU systems. Small object detection is inherently difficult due to the limited number of pixels representing target instances and their tendency to be overwhelmed by background noise. [1][2] emphasized that popular object detectors underperform on small objects, particularly when using aggressive down-sampling strategies in CNNs. [24] also noted that evaluation datasets like LASIESTA often fail to emphasize small object metrics, leading to an underestimation of model weaknesses. To mitigate this, researchers have focused on improving feature pyramid architectures, contextual integration, and resolution preservation. [5] proposed an improved YOLOv3 variant specifically optimized for fine-grained detection tasks. [13] explored how compression techniques affect small object accuracy, especially in video-based pipelines.
Recent literature has explored a broad array of CNN architectures tailored to different detection contexts. [9],[25] conducted comprehensive reviews of deep learning object detectors, noting that attention mechanisms, multi-scale feature fusion, and dilated convolutions can boost small object performance. [26] provided further taxonomies of CNN-based approaches, emphasizing structural innovations over traditional region proposal networks. [21] presented a comparative analysis of key detection algorithms, highlighting strengths and limitations in terms of accuracy, inference speed, and robustness across scales. [6],[19] extended this analysis to road object detection, noting the relevance of generalizability for real-time applications. [14] recently proposed a hybrid framework integrating template matching with Faster R-CNN to improve robustness, particularly in noisy and unstructured environments—an approach inspiring aspects of this current research.
Multidimensional detection has emerged as a strategy to improve spatial awareness. [27] reviewed 2D and 3D detection models and concluded that 3D methods provide contextual advantages in volumetric datasets like CT or MRI scans. [28] surveyed 3D detection for intelligent vehicles, revealing performance trade-offs between accuracy and real-time feasibility. In dynamic environments, [29] focused on object detection for video surveillance, suggesting that temporal context may benefit small object tracking. [30] introduced MmRotate, a benchmark for rotated object detection that further improves spatial precision—an important consideration in rotated medical imaging formats. Several studies have benchmarked object detectors under varying conditions. [31][32] provided overviews and performance breakdowns of classical and deep learning-based methods. [11],[33] presented improvements on SSD and YOLO architectures for real-time applications, while [34] utilized GPU acceleration for high-resolution video streams.
Performance evaluation also requires robust metrics. [15] surveyed evaluation strategies such as precision-recall curves, IoU thresholds, and F1-scores. [16] highlighted the role of uncertainty estimation in object detection, especially in safety-critical systems such as autonomous vehicles or clinical diagnostics. The literature reflects a growing recognition of the need for specialized solutions for small object detection. While deep learning has drastically improved general object detection performance, current models often trade sensitivity for speed or scale. The proposed study builds on recent advancements, including attention-enhanced CNNs, efficient lightweight detectors, and hybrid methodologies like those explored by [14], to deliver a solution that prioritizes early detection of small medical anomalies without sacrificing speed or precision.
This research proposes an enhanced Convolutional Neural Network (CNN) architecture specifically designed for small object detection in medical imaging. The method combines multi-scale feature fusion, attention mechanisms, and dilated convolutions to improve sensitivity to minute pathological features such as early-stage tumors, microcalcifications, and retinal lesions. The workflow is illustrated in Figure 1, which includes preprocessing, feature extraction, region proposal, and detection.
Figure 1. Detailed workflow of the proposed ECNN methodology — datasets, preprocessing, backbone with enhancements, multi-scale fusion, detection head, post-processing, and evaluation metrics
Before training, the medical images undergo standardized preprocessing: intensity normalization, histogram equalization, and resizing to a common input resolution. Intensity values are min-max normalized as
$$ I_{\text{norm}} = \frac{I - I_{\min}}{I_{\max} - I_{\min}} \qquad (1) $$

where $I$ is the original intensity value and $I_{\min}$, $I_{\max}$ are the minimum and maximum intensities of the image. Min-max normalization is used here rather than standardization ($\mu/\sigma$).
We note that histogram equalization may amplify noise in some cases (e.g., low-dose CT) and resizing may introduce interpolation artifacts. These risks were mitigated by modality-specific adaptations and cross-validation across datasets.
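As a minimal illustration rather than the authors' exact pipeline, the min-max normalization of Equation (1), an optional contrast-limited equalization step, and resizing could be implemented as follows; the 512×512 target size and the use of CLAHE are assumptions.

```python
import cv2
import numpy as np

def preprocess(image: np.ndarray, size=(512, 512), equalize=True) -> np.ndarray:
    """Min-max normalize, optionally equalize, and resize a grayscale image.

    The 512x512 target size and the use of CLAHE (contrast-limited adaptive
    histogram equalization) are illustrative assumptions, not the paper's
    exact settings.
    """
    img = image.astype(np.float32)
    # Equation (1): I_norm = (I - I_min) / (I_max - I_min)
    i_min, i_max = img.min(), img.max()
    img = (img - i_min) / (i_max - i_min + 1e-8)

    if equalize:
        # CLAHE limits noise amplification compared with global equalization
        clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
        img = clahe.apply((img * 255).astype(np.uint8)).astype(np.float32) / 255.0

    # Bilinear resize; may introduce interpolation artifacts on very small lesions
    img = cv2.resize(img, size, interpolation=cv2.INTER_LINEAR)
    return img
```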
Our proposed architecture is based on a modified Feature Pyramid Network (FPN) backbone with the following enhancements:
Let $F_l \in \mathbb{R}^{H \times W \times C}$ denote the feature map at layer $l$. Multi-scale fusion combines features from layers $l_1, l_2, \ldots, l_n$ as

$$ F_{\text{fused}} = \sum_{i=1}^{n} w_i \, U(F_{l_i}) \qquad (2) $$

where $w_i$ are learnable weights initialized uniformly and optimized during training, and $U(\cdot)$ denotes bilinear interpolation to match spatial dimensions. Conflict resolution between scales is managed by normalizing feature maps prior to fusion, ensuring balanced contributions across shallow and deep layers.
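A minimal PyTorch sketch of the weighted fusion in Equation (2); the 1×1 projection to a common channel width, the softmax-normalized weights, and the per-location channel normalization are illustrative choices consistent with, but not necessarily identical to, the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFusion(nn.Module):
    """Fuse feature maps from several pyramid levels (Equation 2).

    Each map is projected to a common width, bilinearly resized to the
    target resolution, L2-normalized across channels to balance shallow
    and deep layers, and combined with learnable scalar weights.
    """
    def __init__(self, in_channels, out_channels: int = 256):
        super().__init__()
        self.proj = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels]
        )
        # Learnable fusion weights w_i, initialized uniformly
        self.weights = nn.Parameter(torch.ones(len(in_channels)))

    def forward(self, feats):
        target_hw = feats[0].shape[-2:]          # fuse at the finest resolution
        w = torch.softmax(self.weights, dim=0)   # keep contributions balanced
        fused = 0
        for wi, proj, f in zip(w, self.proj, feats):
            f = proj(f)
            f = F.interpolate(f, size=target_hw, mode="bilinear", align_corners=False)
            f = F.normalize(f, dim=1)            # normalize across channels before fusion
            fused = fused + wi * f
        return fused
```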
Dilated (or atrous) convolutions expand the receptive field without reducing spatial resolution:

$$ (F *_{r} k)(p) = \sum_{s} F(p + r \cdot s) \, k(s) \qquad (3) $$

where $k$ is the convolution kernel, $s$ ranges over kernel offsets, and $r$ is the dilation rate. We used $r = \{2, 4, 6\}$, balancing context aggregation with fine-grained detail preservation.
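The rates $r = \{2, 4, 6\}$ can be realized, for example, as parallel atrous branches whose outputs are concatenated and fused; whether the branches are parallel or stacked is not specified in the text, so this ASPP-style arrangement is an assumption.

```python
import torch
import torch.nn as nn

class DilatedContextBlock(nn.Module):
    """Parallel dilated convolutions (Equation 3) with rates r = {2, 4, 6}.

    Padding equal to the dilation rate keeps the spatial resolution
    unchanged, so fine-grained detail is preserved while the receptive
    field grows.
    """
    def __init__(self, channels: int, rates=(2, 4, 6)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(channels, channels, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        ])
        self.fuse = nn.Conv2d(channels * len(rates), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
```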
We provide here an explicit rationale for the major design decisions, together with guidance to support reproducibility and interpretability.
The detection head consists of two parallel branches: a classification branch that predicts lesion class scores and a regression branch that refines bounding-box coordinates (see the sketch below).
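A minimal sketch of such a two-branch head; the anchor count, class count, and branch depth are illustrative assumptions rather than values reported in the paper.

```python
import torch.nn as nn

class DetectionHead(nn.Module):
    """Two parallel branches: class scores and bounding-box offsets.

    num_anchors, num_classes, and the two-layer branch depth are
    illustrative choices, not values taken from the paper.
    """
    def __init__(self, in_channels: int = 256, num_classes: int = 2, num_anchors: int = 9):
        super().__init__()
        def branch(out_per_anchor):
            return nn.Sequential(
                nn.Conv2d(in_channels, in_channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(in_channels, num_anchors * out_per_anchor, 3, padding=1),
            )
        self.cls_branch = branch(num_classes)   # lesion / background scores
        self.reg_branch = branch(4)             # (dx, dy, dw, dh) offsets

    def forward(self, feat):
        return self.cls_branch(feat), self.reg_branch(feat)
```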
The total loss is

$$ L_{\text{total}} = L_{\text{cls}} + \lambda \, L_{\text{reg}} \qquad (4) $$

where $L_{\text{cls}}$ is the classification loss, $L_{\text{reg}}$ is the bounding-box regression loss, and $\lambda$ balances the two terms. The choice of $\lambda = 1.0$ was validated via sensitivity testing across {0.5, 1.0, 2.0}, with 1.0 providing the best balance between classification and localization accuracy. Similarly, 100 epochs were sufficient for convergence, as early stopping prevented overfitting.
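A minimal sketch of Equation (4), under the assumption that cross-entropy is used for classification and smooth-L1 for box regression; the paper specifies only the weighted sum and the sensitivity test over $\lambda$.

```python
import torch
import torch.nn.functional as F

def total_loss(cls_logits, cls_targets, box_preds, box_targets, lam: float = 1.0):
    """Equation (4): L_total = L_cls + lambda * L_reg.

    Cross-entropy for classification and smooth-L1 for box regression are
    assumed loss forms; only the weighted combination and the lambda
    sensitivity test over {0.5, 1.0, 2.0} are stated in the paper.
    """
    l_cls = F.cross_entropy(cls_logits, cls_targets)
    l_reg = F.smooth_l1_loss(box_preds, box_targets)
    return l_cls + lam * l_reg
```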
To measure performance, we use precision, recall, F1-score, mean Average Precision at an IoU threshold of 0.5 (mAP@0.5), and Intersection over Union (IoU), ensuring comprehensive assessment of both classification and localization. In addition, we report false positives per image (FPPI), a clinically meaningful metric that reflects potential workflow burden on radiologists.
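For clarity, the IoU and FPPI computations can be sketched as follows; the (x1, y1, x2, y2) box format and the 0.5 matching threshold are assumptions for illustration, not the paper's exact evaluation code.

```python
def iou(box_a, box_b):
    """Intersection over Union for boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-8)

def fppi(all_detections, all_ground_truths, iou_thr: float = 0.5):
    """False positives per image (simplified: no one-to-one matching).

    A detection counts as a false positive if it overlaps no ground-truth
    box by at least iou_thr.
    """
    false_positives, n_images = 0, len(all_detections)
    for dets, gts in zip(all_detections, all_ground_truths):
        for det in dets:
            if not any(iou(det, gt) >= iou_thr for gt in gts):
                false_positives += 1
    return false_positives / max(n_images, 1)
```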
To ensure reproducibility, we provide framework and library details (PyTorch, CUDA, cuDNN) and explicitly describe data preprocessing, augmentation parameters, and model hyperparameters. Baseline models (Faster R-CNN, YOLOv5, EfficientDet) were selected because they represent two-stage, single-stage, and compound scaling paradigms, respectively. Additional comparisons with SSD and RetinaNet were explored but found less competitive in small object tasks.
To evaluate the performance of the proposed Enhanced Convolutional Neural Network (ECNN) architecture for small object detection in medical imaging, we conducted experiments on two widely used datasets: the NIH Chest X-ray dataset (112,000 images) and the BUSI Breast Ultrasound dataset (780 scans). The data were divided into 70% training, 10% validation, and 20% testing. Class imbalance was addressed using weighted loss functions proportional to class frequencies, as sketched below.
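A minimal sketch of the class-frequency weighting described above; inverse-frequency weighting and a binary lesion/background setting are assumptions, since the paper states only that loss weights account for class frequencies.

```python
import torch
import torch.nn.functional as F

def class_weights(labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Weights inversely proportional to class frequency (assumed scheme)."""
    counts = torch.bincount(labels, minlength=num_classes).float()
    return counts.sum() / (num_classes * counts.clamp(min=1))

# Usage in the classification term of the loss (illustrative):
# w = class_weights(train_labels, num_classes=2)
# loss = F.cross_entropy(logits, targets, weight=w)
```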
The proposed Enhanced CNN (ECNN) outperformed baseline detectors including Faster R-CNN, YOLOv5, and EfficientDet across multiple benchmarks. As shown in Table 1, ECNN achieved an F1-score of 88.2%, mAP@0.5 of 86.8%, and IoU of 78.6%. These results indicate improved sensitivity for detecting subtle lesions such as small nodules, cysts, and breast masses. The model also reduced false positives per image (FPPI = 0.12) compared with YOLOv5 (0.25) and EfficientDet (0.19), a clinically meaningful reduction in unnecessary follow-up investigations and workflow burden. The improvements are statistically significant (p < 0.01) across five-fold cross-validation, and ECNN consistently delivered higher recall, which is critical for early screening where missed detections may delay diagnosis.
Table 1. Performance Comparison Across Different Models
Model | Precision (%) | Recall (%) | F1-Score (%) | mAP@0.5 (%) | IoU (%)
CNN-Baseline | 74.2 | 69.8 | 71.9 | 68.5 | 63.2
Faster R-CNN | 81.3 | 78.5 | 79.9 | 77.2 | 70.1
YOLOv5 | 84.7 | 80.2 | 82.4 | 80.9 | 72.8
EfficientDet | 86.1 | 82.5 | 84.2 | 83.5 | 74.3
Proposed ECNN | 89.4 | 87.1 | 88.2 | 86.8 | 78.6
Our results align with and extend findings from previous studies. Zangana et al. (2024) introduced a hybrid Faster R-CNN with template matching that improved robustness but achieved a lower mAP (83.2%) compared to our ECNN (86.8%). Similarly, nnUNet-based medical segmentation models (Isensee et al., 2021) demonstrated strong lesion detection performance but required substantially greater computational resources and produced higher FPPI (0.22) than ECNN. Compared to attention-enhanced YOLO variants (Zhao & Li, 2020), ECNN’s integration of dilated convolutions and multi-scale fusion provided more stable improvements across modalities (mammography, CT, fundus).
While the NIH Chest X-ray (112,000 images) and BUSI (780 scans) datasets provided strong evaluation on lung nodules and breast lesions, they do not fully represent other clinically important small-object detection tasks such as microcalcifications in mammograms or retinal hemorrhages. To partially address this, we conducted supplementary validation on a subset of the IDRiD retinal dataset (516 images), where ECNN achieved an F1-score of 83.7%. Although lower than its performance on NIH and BUSI, this suggests moderate transferability but highlights the need for broader multimodal evaluation.
Figure 2 provides sample outputs where ECNN successfully detected subtle 3–4 mm lung nodules missed by YOLOv5 and Faster R-CNN, and delineated small malignant breast masses more precisely than EfficientDet. However, failure cases remain: ECNN struggled with sub-2 mm microcalcifications in dense mammograms, often misclassifying them as background noise, and false negatives increased in retinal fundus images with heavy illumination artifacts. These findings indicate limitations in extremely fine-grained detection and emphasize the importance of continued refinement for robustness in noisy clinical environments.
Figure 2. Qualitative Comparison Visualization
Table 2 highlights the incremental performance gains from each architectural enhancement: dilated convolutions improved context aggregation, CBAM attention enhanced fine-detail sensitivity, and multi-scale fusion stabilized small-object representation (a minimal sketch of the CBAM-style attention module follows Table 2). The full ECNN achieved the strongest balance, but at increased computational cost. Notably, attention mechanisms occasionally increased complexity without significant benefit on simpler tasks (e.g., large, high-contrast lesions), suggesting diminishing returns in certain scenarios.
Table 2. Ablation Study Results
Configuration | F1-Score (%) | mAP@0.5 (%)
Baseline CNN | 71.9 | 68.5
+ Dilated Convolutions | 75.6 | 72.1
+ Attention Module (CBAM) | 80.4 | 77.3
+ Feature Pyramid Network (FPN) | 84.6 | 82.1
+ Multi-scale Supervision (Full ECNN) | 88.2 | 86.8
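For reference, a minimal sketch of a CBAM-style channel-spatial attention module of the kind evaluated in the ablation; the reduction ratio of 16 and the 7×7 spatial kernel follow common CBAM defaults and are assumptions about the exact configuration used here.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """CBAM-style attention: channel gating followed by spatial gating.

    The reduction ratio (16) and the 7x7 spatial kernel follow common CBAM
    defaults and are assumed rather than taken from the paper.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.channel_mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention from average- and max-pooled descriptors
        avg = self.channel_mlp(x.mean(dim=(2, 3)))
        mx = self.channel_mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention from channel-wise average and max maps
        spatial = torch.cat([x.mean(dim=1, keepdim=True),
                             x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial_conv(spatial))
```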
To better capture efficiency trade-offs, we quantified computational costs: training time is approximately 2× that of YOLOv5 (48 hours vs. 24 hours on an RTX A6000), memory footprint is about 1.5× that of YOLOv5, and inference remains below 30 ms per image.
These results clarify that while ECNN improves detection accuracy, it introduces higher computational demands, which may pose challenges in resource-limited settings. To further illustrate the performance gains, Figure 3 shows the progression of F1-score and mAP as each enhancement is added during the ablation study; the clear upward trajectory supports our claim that the integrated hybrid design provides cumulative benefits.
Figure 3. F1-score and mAP progression across ECNN architecture components during ablation study
The superior recall and reduced FPPI suggest that ECNN could meaningfully lower the risk of missed early diagnoses while reducing unnecessary alarms in clinical workflows. By quantifying computational efficiency (≈2× training time, 1.5× memory vs. YOLOv5), our findings provide realistic expectations for deployment in research hospitals with adequate GPU resources. Importantly, the comparative analysis against prior works demonstrates that carefully combining lightweight modules yields performance gains without the excessive overhead of transformer-based or 3D CNN approaches.
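As a reproducibility aid rather than the authors' benchmarking script, GPU inference latency, peak memory, and parameter count can be estimated along the following lines; the input resolution, warm-up count, and batch size of 1 are assumptions.

```python
import time
import torch

@torch.no_grad()
def benchmark(model: torch.nn.Module, input_shape=(1, 3, 512, 512), runs: int = 100):
    """Rough GPU latency, peak-memory, and parameter-count measurement."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = model.to(device).eval()
    x = torch.randn(*input_shape, device=device)

    for _ in range(10):                      # warm-up iterations
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
        torch.cuda.reset_peak_memory_stats()

    start = time.perf_counter()
    for _ in range(runs):
        model(x)
    if device == "cuda":
        torch.cuda.synchronize()
    latency_ms = (time.perf_counter() - start) / runs * 1e3

    peak_mb = torch.cuda.max_memory_allocated() / 2**20 if device == "cuda" else float("nan")
    params_m = sum(p.numel() for p in model.parameters()) / 1e6
    return latency_ms, peak_mb, params_m
```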
Our findings affirm that ECNN significantly improves sensitivity to small lesions while reducing false positives, but these benefits come with trade-offs in efficiency and memory requirements. Baseline CNNs lost roughly 30% of sub-5 mm features due to downsampling, whereas ECNN preserved them via dilated convolutions and multi-scale fusion. Training required about twice as long as YOLOv5 (48 hours vs. 24 hours on an RTX A6000), but inference remained clinically feasible (<30 ms/image). Broader clinical validation is still required to confirm generalizability beyond NIH, BUSI, and IDRiD: when tested on a small subset of brain MRI scans (private dataset, n = 200), ECNN retained a recall of 82%, though accuracy dropped due to modality differences, highlighting the need for multimodal adaptation. A critical barrier to adoption remains interpretability, as clinicians must understand the model's decisions to trust automated outputs, and deployment will also require integration with DICOM-based PACS systems. Potential dataset bias, arising from overrepresented conditions in public datasets, could further limit real-world reliability. Addressing these challenges will require explainable AI mechanisms, bias mitigation strategies, and collaboration with radiologists to ensure clinically aligned benchmarks. Future extensions include incorporating 3D volumetric context for CT/MRI and improving explainability to support radiologist trust.
This paper presented a hybrid attention-enhanced CNN (ECNN) that integrates multi-scale feature fusion, dilated convolutions, and combined channel–spatial attention mechanisms to improve small-object detection in medical imaging. The model demonstrated statistically significant improvements in detection accuracy (F1 = 88.2%, mAP@0.5 = 86.8%), localization (IoU = 78.6%), and false-positive control (FPPI = 0.12) compared to state-of-the-art baselines. This research contributes to the theory of hybrid architectures by showing that carefully balanced combinations of lightweight modules (dilated convolutions, attention, fusion) can match or exceed heavier alternatives (e.g., transformers, 3D CNNs) while retaining practical efficiency. The reduced false positives and improved recall translate directly into meaningful clinical benefits for early screening of lung nodules, breast lesions, and retinal abnormalities. The study is constrained by limited dataset diversity, increased computational requirements, and limited interpretability mechanisms; these issues caution against immediate clinical deployment. Future work will (1) extend validation to diverse modalities (MRI, PET, low-resolution mammograms), (2) develop explainable AI modules (e.g., saliency maps, uncertainty estimates) to aid clinical interpretability, (3) explore domain adaptation and self-supervised pretraining to mitigate dataset bias, (4) optimize ECNN for edge devices, targeting <1 s inference for real-time deployment, and (5) investigate transformer-CNN hybrids for integrating long-range dependencies without prohibitive resource demands. Overall, this study provides evidence that hybrid lightweight CNN architectures can bridge the gap between accuracy and efficiency in small-object medical detection, paving the way for clinically viable AI-assisted early diagnosis systems.
DECLARATION
Supplementary Materials
No supplementary materials are available for this study.
Author Contribution
All authors contributed equally to this paper. All authors read and approved the final paper.
Funding
This research received no external funding.
Conflicts of Interest
The authors declare no conflict of interest.
REFERENCES
AUTHOR BIOGRAPHY
Hewa Majeed Zangana, Hewa Majeed Zangana is an Assistant Professor at Duhok Polytechnic University (DPU), Iraq, and a current PhD candidate in Information Technology Management (ITM) at the same institution. He has held numerous academic and administrative positions, including Assistant Professor at Ararat Private Technical Institute, Lecturer at DPU's Amedi Technical Institute and Nawroz University, and Acting Dean of the College of Computer and IT at Nawroz University. His administrative roles have included Director of the Curriculum Division at the Presidency of DPU, Manager of the Information Unit at DPU's Research Center, and Head of the Computer Science Department at Nawroz University. Dr. Zangana's research interests include network systems, information security, mobile and data communication, and intelligent systems. He has authored numerous articles in peer-reviewed journals, including Inform: Jurnal Ilmiah Bidang Teknologi Informasi dan Komunikasi, Indonesian Journal of Education and Social Science, TIJAB, INJIISCOM, IEEE, and AJNU. In addition to his journal contributions, he has published more than five academic books with IGI Global, several of which are indexed in Scopus and Web of Science (Clarivate). Beyond publishing, Dr. Zangana actively contributes to the academic community through editorial service. He serves as a reviewer for the Qubahan Academic Journal and the Scientific Journals of Nawroz University. He is also a member of several academic and scientific committees, including the Scientific Curriculum Development Committee, the Student Follow-up Program Committee, and the Committee for Drafting the Rules of Procedure for Consultative Offices. Email: hewa.zangana@dpu.edu.krd. Researcher website: Scopus (https://www.scopus.com/authid/detail.uri?authorId=57203148210).
Marwan Omar, Dr. Marwan Omar is an Associate Professor of Cybersecurity and Digital Forensics at the Illinois Institute of Technology. He holds a Doctorate in Computer Science specializing in Digital Systems Security from Colorado Technical University and a Post-Doctoral Degree in Cybersecurity from the University of Fernando Pessoa, Portugal. Dr. Omar's work focuses on cybersecurity, data analytics, machine learning, and AI in digital forensics. His extensive research portfolio includes numerous publications and over 598 citations. Known for his industry experience and dedication to teaching, he actively contributes to curriculum development, preparing future cybersecurity experts for emerging challenges. Google Scholar: https://scholar.google.com/citations?user=5T5iAZQAAAAJ&hl=en&oi=ao
Shuai Li, Dr. Shuai (Steven) Li is currently a Full Professor with the Faculty of Information Technology and Electrical Engineering (ITEE), University of Oulu, and an Adjunct Professor with the VTT Technical Research Centre of Finland. Steven's main research interests are nonlinear optimization and intelligent control, with applications to robotics. He has published over 200 SCI-indexed papers in peer-reviewed journals (including more than 90 in IEEE Transactions). Steven is available to supervise both master's and PhD students; PhD graduates from his group are now working as professors in leading universities in Hong Kong, India, and China. Steven is a Fellow of the IET (Institution of Engineering and Technology), a Fellow of the BCS (British Computer Society), and a Fellow of the IMA (Institute of Mathematics and its Applications). If you are interested in pursuing further study under Steven's supervision, please feel free to contact him at shuai.li@oulu.fi. Please refer to his Google Scholar page to follow the latest research by Steven and his team: https://scholar.google.com/citations?user=H8UOWqoAAAAJ&hl=en
Jamal N. Al-Karaki, Jamal Al-Karaki is currently a full professor at CIS, Zayed University. He has a rich university career in education, service, and research. He holds a PhD degree (with Distinction) in Computer Engineering from Iowa State University, USA, with a Research Excellence Award. He has a proven record of progressive responsibility, including leadership positions as a college Dean, Director of IT, and Co-Founder and Department Head at various institutes. He led the development of national centers in cyber security as well as undergraduate and graduate computing programs. As an active researcher, he develops plans to advance the research agenda at CIS. He has published 90+ refereed technical articles in scholarly international journals and conference proceedings. He is a senior member of IEEE and Tau Beta Pi. He is a member of the Mohammad Bin Rashid Academy for Scientists in the UAE and was recently listed among the top 2% most highly cited researchers in his field worldwide. He is also a certified reviewer for CAA and a certified Pearson EDI verifier/assessor. Google Scholar: https://scholar.google.com/citations?user=OWtyyi0AAAAJ&hl=en
Anik Vega Vitianingsih, Master of Engineering and Lecturer at Universitas Dr. Soetomo, Indonesia. Disciplines: Logic and Foundations of Mathematics; Information Systems (Business Informatics); Information Science; Computing in Mathematics, Natural Science, Engineering and Medicine. Google Scholar: https://scholar.google.co.id/citations?user=acUhDoIAAAAJ&hl=en. Email: vega@unitomo.ac.id