Analysis of Feature Map Visualizations from AlexNet Convolutional Layers for Diabetic Retinopathy Classification on Retinal Fundus Images
DOI: https://doi.org/10.62017/merdeka.v3i4.7325

Keywords: AlexNet, Convolutional Layer, Diabetic Retinopathy, Feature Map Visualization

Abstract
Diabetic Retinopathy (DR) is a diabetes complication and the leading cause of blindness in the working-age population. Convolutional Neural Network (CNN)-based automated detection systems have proven effective in classifying DR; however, model interpretability remains a challenge in clinical deployment. This study presents an in-depth analysis of feature map visualizations across five convolutional layers of the AlexNet architecture (Conv. 1 to Conv. 5), trained on the APTOS 2019 dataset for binary classification of Diabetic Retinopathy versus Non-Diabetic Retinopathy. Observations were conducted comparatively between DR and Non-DR fundus retinal images to understand how each convolutional layer extracts and transforms feature representations. Results indicate that early layers (Conv. 1–2) extract low-level features such as edges, orientations, and basic textures, while deeper layers (Conv. 3–5) build increasingly abstract and discriminative semantic representations. Significant differences in activation patterns between DR and Non-DR images are identifiable from Conv. 3 onward, becoming more defined in Conv. 4–5, confirming AlexNet's ability to hierarchically extract retinal pathological features. This study contributes to the explainability of deep learning models for medical applications, specifically providing a visual interpretive basis that supports clinician confidence in CNN-based CAD systems.










