Örebro University Publications
Rahaman, G. M. Atiqur, Dr. (ORCID iD: orcid.org/0000-0001-7387-6650)

Publications (10 of 13)
Jamil, M. S., Banik, S. P., Rahaman, G. M. & Saha, S. (2023). Advanced GradCAM++: Improved Visual Explanations of CNN Decisions in Diabetic Retinopathy. In: Nazmul Siddique; Mohammad Shamsul Arefin; Md Atiqur Rahman Ahad; M. Ali Akber Dewan (Ed.), Computer Vision and Image Analysis for Industry 4.0: (pp. 64-75). New York: Taylor & Francis Group
Advanced GradCAM++: Improved Visual Explanations of CNN Decisions in Diabetic Retinopathy
2023 (English) In: Computer Vision and Image Analysis for Industry 4.0 / [ed] Nazmul Siddique; Mohammad Shamsul Arefin; Md Atiqur Rahman Ahad; M. Ali Akber Dewan, New York: Taylor & Francis Group, 2023, p. 64-75. Chapter in book (Refereed)
Abstract [en]

Convolutional neural network (CNN)-based methods have achieved state-of-the-art performance in solving several complex computer vision problems, including the assessment of diabetic retinopathy (DR). Despite this, CNN-based methods are often criticized as “black box” methods for providing little to no insight into their internal functioning. In recent years there has been increased interest in developing explainable deep learning models, and this paper is an effort in that direction in the context of DR. Building on Grad-CAM++, one of the best performing methods, we propose Advanced Grad-CAM++ to further improve the visual explanations of CNN model predictions, in terms of both better localization of DR pathology and explaining occurrences of multiple DR pathology types in a fundus image. Keeping all layers and operations as they are, the proposed method adds a non-learnable bilateral convolutional layer between the input image and the very first learnable convolutional layer of Grad-CAM++. Experiments were conducted on fundus images collected from publicly available sources, namely EyePACS and DIARETDB1. The Intersection over Union (IoU) score between the ground truth and the heatmap produced by each method was used to compare performance quantitatively. The overall IoU score for Advanced Grad-CAM++ is 0.179, whereas for Grad-CAM++ it is 0.161, an 11.18% improvement in agreement with the ground truths.
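A minimal sketch of the ideas above, not the authors' code: the non-learnable bilateral layer is stood in for by OpenCV's bilateral filter (the filter parameters and the synthetic input are assumptions), followed by the IoU metric and a worked check of the reported 11.18% gain.

import cv2
import numpy as np

# Stand-in for the non-learnable bilateral layer: edge-preserving smoothing
# of the input fundus image before the first learnable convolution
# (the synthetic image and the d/sigma values are assumptions).
img = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
smoothed = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    # Intersection over Union between two binary masks.
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / union if union else 0.0

# Worked check of the reported gain: (0.179 - 0.161) / 0.161 = 11.18%.
print(f"{(0.179 - 0.161) / 0.161:.2%}")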

Place, publisher, year, edition, pages
New York: Taylor & Francis Group, 2023
Keywords
Deep Learning, Interpretable ML, CNN, Diabetic Retinopathy, Optic Disc
National Category
Computer graphics and computer vision
Research subject
Signal Processing; Computerized Image Analysis
Identifiers
urn:nbn:se:oru:diva-106530 (URN), 2-s2.0-85161149678 (Scopus ID), 9781003256106 (ISBN), 9781032164168 (ISBN), 9781032187624 (ISBN)
Available from: 2023-06-22. Created: 2023-06-22. Last updated: 2025-02-07. Bibliographically approved.
Protik, P., Rahaman, G. M. & Sajib, S. (2023). Automated Detection of Diabetic Foot Ulcer Using Convolutional Neural Network. In: Md. Sazzad Hossain; Satya Prasad Majumder; Nazmul Siddique; Md. Shahadat Hossain (Ed.), The Fourth Industrial Revolution and Beyond: Select Proceedings of IC4IR+ (pp. 565-576). Singapore: Springer Nature
Automated Detection of Diabetic Foot Ulcer Using Convolutional Neural Network
2023 (English) In: The Fourth Industrial Revolution and Beyond: Select Proceedings of IC4IR+ / [ed] Md. Sazzad Hossain; Satya Prasad Majumder; Nazmul Siddique; Md. Shahadat Hossain, Singapore: Springer Nature, 2023, p. 565-576. Chapter in book (Refereed)
Abstract [en]

Diabetic foot ulcers (DFU) are among the major health complications for people with diabetes. If not detected and treated properly at an early stage, they may lead to limb amputation or life-threatening situations. A diabetic patient has a 15–25% chance of developing DFU at a later stage in life if proper foot care is not taken. Because of these high risks, patients with diabetes need regular checkups and medication, which places a huge financial burden on both the patients and their families. Hence, a cost-effective, remote, and suitable DFU diagnosis technique is urgently needed. This paper presents a convolutional neural network (CNN)-based approach for the automated detection of diabetic foot ulcers from pictures of a patient’s feet. ResNet50 is used as the backbone of Faster R-CNN, which performed better than the original Faster R-CNN that uses VGG16. A total of 2000 images from the Diabetic Foot Ulcer Grand Challenge 2020 (DFUC2020) dataset have been used for the experiment. The proposed method obtained precision, recall, F1-score, and mean average precision of 77.3%, 89.0%, 82.7%, and 71.3%, respectively, in DFU detection, which is better than the results obtained by the original Faster R-CNN.
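A hedged sketch of such a detector using torchvision's off-the-shelf Faster R-CNN with a ResNet50 backbone; the FPN variant, the two-class head, and the input size are assumptions, not the paper's exact configuration.

import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# ResNet50-backed Faster R-CNN; replace the head for background + ulcer.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

model.eval()
with torch.no_grad():
    image = torch.rand(3, 480, 640)        # stand-in for a foot photograph
    detections = model([image])[0]          # dict with boxes, labels, scores

# Consistency check of the reported F1-score:
# 2 * 0.773 * 0.890 / (0.773 + 0.890) ≈ 0.827, i.e. the quoted 82.7%.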

Place, publisher, year, edition, pages
Singapore: Springer Nature, 2023
Series
Lecture Notes in Electrical Engineering, ISSN 1876-1100, E-ISSN 1876-1119 ; 980
Keywords
Diabetic foot ulcer, Object detection, Convolutional neural network, Deep learning, Faster R-CNN
National Category
Computer graphics and computer vision
Research subject
Computerized Image Analysis
Identifiers
urn:nbn:se:oru:diva-106531 (URN), 10.1007/978-981-19-8032-9_40 (DOI), 9789811980312 (ISBN), 9789811980343 (ISBN), 9789811980329 (ISBN)
Available from: 2023-06-22. Created: 2023-06-22. Last updated: 2025-02-07. Bibliographically approved.
Rahaman, G. M., Längkvist, M. & Loutfi, A. (2022). Deep Learning based Aerial Image Segmentation for Computing Green Area Factor. In: 2022 10th European Workshop on Visual Information Processing (EUVIP). Paper presented at 10th European Workshop on Visual Information Processing (EUVIP), Lisbon, Portugal, September 11-14, 2022. IEEE
Deep Learning based Aerial Image Segmentation for Computing Green Area Factor
2022 (English) In: 2022 10th European Workshop on Visual Information Processing (EUVIP), IEEE, 2022. Conference paper, Published paper (Refereed)
Abstract [en]

The Green Area Factor (GYF) is an aggregate norm used as an index to quantify how much eco-efficient surface exists in a given area. Although the GYF is a single number, it expresses several different contributions of natural objects to the ecosystem. It is used as a planning tool to create and manage attractive urban environments while ensuring the existence of required green/blue elements. Currently, the GYF model is rapidly gaining traction in different communities. However, calculating the GYF value is challenging, as a significant amount of manual effort is needed. In this study, we present a novel approach for automatic extraction of the GYF value from aerial imagery using semantic segmentation results. For model training and validation, a set of RGB images captured by a drone imaging system is used. Each image is annotated into trees, grass, soil/open surface, building, and road. A modified U-Net deep learning architecture is used for the segmentation of the various objects by classifying each pixel into one of the semantic classes. From the segmented image we calculate the class-wise fractional area coverages, which are used as input to the simplified GYF model called Sundbyberg to calculate the GYF value. Experimental results show that the deep learning method provides about 92% mean IoU on test image segmentation, and the corresponding GYF value is 0.34.
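The aggregation step can be sketched as below. The per-class weights are purely hypothetical placeholders, since the Sundbyberg model's actual coefficients are not given in the abstract; only the structure (coverage fractions combined into one number) follows the text.

import numpy as np

CLASSES = ["tree", "grass", "soil_open", "building", "road"]

# Hypothetical eco-efficiency weights; the real Sundbyberg coefficients differ.
WEIGHTS = {"tree": 0.8, "grass": 0.6, "soil_open": 0.4, "building": 0.0, "road": 0.0}

def gyf_from_segmentation(label_map: np.ndarray) -> float:
    # label_map: the U-Net's per-pixel argmax over the five classes.
    total = label_map.size
    fractions = {c: (label_map == i).sum() / total for i, c in enumerate(CLASSES)}
    return sum(WEIGHTS[c] * f for c, f in fractions.items())

print(gyf_from_segmentation(np.random.randint(0, 5, (512, 512))))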

Place, publisher, year, edition, pages
IEEE, 2022
Series
European Workshop on Visual Information Processing, ISSN 2471-8963, E-ISSN 2164-974X
Keywords
Green area index, deep learning, CNN, image segmentation, urban planning, semantic classification
National Category
Computer graphics and computer vision
Research subject
Computerized Image Analysis
Identifiers
urn:nbn:se:oru:diva-102544 (URN), 10.1109/EUVIP53989.2022.9922743 (DOI), 000886233300019 (ISI), 2-s2.0-85141101986 (Scopus ID), 9781665466233 (ISBN), 9781665466240 (ISBN)
Conference
10th European Workshop on Visual Information Processing (EUVIP), Lisbon, Portugal, September 11-14, 2022
Available from: 2022-12-05. Created: 2022-12-05. Last updated: 2025-02-07. Bibliographically approved.
Pal, S. & Rahaman, G. M. (2022). Image Forgery Detection Using CNN and Local Binary Pattern-Based Patch Descriptor. In: Satyabrata Roy; Deepak Sinwar; Thinagaran Perumal; Adam Slowik; João Manuel R. S. Tavares (Ed.), Innovations in Computational Intelligence and Computer Vision: Proceedings of ICICV 2021 (pp. 429-439). Springer
Image Forgery Detection Using CNN and Local Binary Pattern-Based Patch Descriptor
2022 (English) In: Innovations in Computational Intelligence and Computer Vision: Proceedings of ICICV 2021 / [ed] Satyabrata Roy; Deepak Sinwar; Thinagaran Perumal; Adam Slowik; João Manuel R. S. Tavares, Springer, 2022, p. 429-439. Chapter in book (Refereed)
Abstract [en]

This paper proposes a novel method to detect multiple types of image forgery. The method uses the Local Binary Pattern (LBP) as a descriptive feature of image patches. A uniquely designed convolutional neural network (LBPNet) is proposed, in which four VGG-style blocks are followed by a support vector machine (SVM) classifier. It uses the ‘Swish’ activation function, the ‘Adam’ optimizer, and a combination of ‘Binary Cross-Entropy’ and ‘Squared Hinge’ as the loss functions. The proposed method is trained and tested on 111,350 image patches generated from phase I of the IEEE IFS-TC Image Forensics Challenge dataset. The results reveal that training such a network on computed LBP patches of real and forged images can produce 98.96% validation and 98.84% testing accuracy, with an area under the curve (AUC) score of 0.988. The experimental results demonstrate the efficacy of the proposed method relative to state-of-the-art techniques.
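The LBP preprocessing that feeds LBPNet can be sketched with scikit-image; the P, R and method parameters and the patch size are assumptions, as the abstract does not state the paper's settings.

import numpy as np
from skimage.feature import local_binary_pattern

def lbp_patch(gray_patch: np.ndarray, P: int = 8, R: float = 1.0) -> np.ndarray:
    # Map a grayscale patch to its LBP image; the CNN is trained on these.
    return local_binary_pattern(gray_patch, P, R, method="uniform")

patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)  # stand-in patch
lbp_image = lbp_patch(patch)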

Place, publisher, year, edition, pages
Springer, 2022
Series
Advances in Intelligent Systems and Computing ; 1424
Keywords
Image forgery, Convolutional neural network (CNN), Local binary pattern (LBP), LBPNet
National Category
Computer graphics and computer vision
Research subject
Computer Science; Signal Processing
Identifiers
urn:nbn:se:oru:diva-99133 (URN), 10.1007/978-981-19-0475-2_38 (DOI), 9789811904745 (ISBN), 9789811904752 (ISBN)
Available from: 2022-05-23. Created: 2022-05-23. Last updated: 2025-02-07. Bibliographically approved.
Rana, M. M. M., Hasnat, A. & Rahaman, G. M. (2022). SMIFD-1000: Social media image forgery detection database. Forensic Science International: Digital Investigation, 41, Article ID 301392.
SMIFD-1000: Social media image forgery detection database
2022 (English) In: Forensic Science International: Digital Investigation, ISSN 2666-2825, Vol. 41, article id 301392. Article in journal (Refereed), Published
Abstract [en]

Image forgery/manipulation is one of the most alarming topics and has become a major concern on social media platforms with regard to privacy and safety. Therefore, the detection of manipulated images has been of immense interest to researchers in recent years. Despite the availability of numerous image forgery detection (IFD) datasets, very few address the actual challenge of collecting manipulated images from real-world scenarios, e.g., from social media. Consequently, the contextual knowledge behind the use of manipulated images remains out of reach. To address these issues, we propose an indigenous social media image forgery detection database, named SMIFD-1000. This dataset provides rich annotations from several aspects: (a) image level: image regions that help to classify pixel-level information; (b) forgery type: rich information about the manipulation; and (c) target and motif of manipulations: contextually rich knowledge about the manipulation, which is significant from the perspective of social science. Finally, we examine and benchmark the effectiveness of several publicly available algorithms on this dataset to demonstrate its usefulness. Results show that the dataset is highly challenging and will serve as an important benchmark for existing and future IFD algorithms.
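A purely hypothetical illustration of what one annotation record covering the three aspects above might look like; every field name and value here is invented for illustration, so consult the dataset itself for the real schema.

# Hypothetical SMIFD-1000 annotation record; fields and values are invented.
annotation = {
    "image_id": "smifd_0001",
    "regions": "smifd_0001_mask.png",   # (a) image-level / pixel information
    "forgery_type": "copy-move",         # (b) manipulation information
    "target": "public figure",           # (c) target of the manipulation
    "motif": "defamation",               # (c) contextual motive
}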

Place, publisher, year, edition, pages
Elsevier, 2022
Keywords
Image Manipulation, Digital Forensics, Image Dataset
National Category
Computer graphics and computer vision
Research subject
Computer Science
Identifiers
urn:nbn:se:oru:diva-99414 (URN)10.1016/j.fsidi.2022.301392 (DOI)000836452600002 ()2-s2.0-85131420593 (Scopus ID)
Note

Funding agencies:

Information and Communication Technology (ICT) Division, Ministry of Post, Telecommunication, and Information Technology, Government of the People's Republic of Bangladesh, 56.00.0000.028.33.093.19-431

Blackbird.AI

Available from: 2022-06-03. Created: 2022-06-03. Last updated: 2025-02-07. Bibliographically approved.
Saha, S., Rahaman, G. M., Islam, T., Akter, M., Frost, S. & Kanagasingam, Y. (2021). Retinal image registration using log-polar transform and robust description of bifurcation points. Biomedical Signal Processing and Control, 66, Article ID 102424.
Retinal image registration using log-polar transform and robust description of bifurcation points
2021 (English) In: Biomedical Signal Processing and Control, ISSN 1746-8094, E-ISSN 1746-8108, Vol. 66, article id 102424. Article in journal (Refereed), Published
Abstract [en]

Registration of retinal images is a crucial and fundamental step in several medical diagnoses. In this paper we propose an innovative method for retinal image registration. The method applies the log-polar transform to approximate the differences in scale and orientation among images. A novel descriptor named Combined Local Haar of Bifurcation points (CLHB) is proposed for robust description and precise matching of retinal bifurcation and cross-over points. Experiments are performed on retinal image registration datasets collected from private and public sources, consisting of a total of 484 fundus photographs (i.e. 242 pairs). The proposed method has been compared with the state-of-the-art Generalized Dual-Bootstrap Iterative Closest Point (GDB-ICP), Hernandez-Matas et al., Saha et al., and Chen et al.’s methods and has been found to outperform them by a clear margin. On the publicly available FIRE dataset, our proposed method is found to be 2% more accurate than the best performing method, that of Saha et al. On the private dataset the method is found to be about 3% more accurate than the best performing method.
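The role of the log-polar transform can be sketched with scikit-image; this is a generic illustration, not the authors' implementation, and the radius choice and grayscale input are assumptions.

import numpy as np
from skimage.transform import warp_polar

def to_log_polar(image: np.ndarray) -> np.ndarray:
    # Rows index angle, columns index log-radius, so a rotation of the
    # retina becomes a row shift and a scale change becomes a column shift.
    return warp_polar(image, scaling="log", radius=min(image.shape[:2]) // 2)

# Cross-correlating to_log_polar(fixed) with to_log_polar(moving) then
# approximates the relative rotation and scale between the two images.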

Place, publisher, year, edition, pages
Elsevier, 2021
Keywords
Image registration, Color fundus photographs, Local feature descriptor, Log-polar transform, Retinal image registration
National Category
Computer graphics and computer vision
Research subject
Computerized Image Analysis; Computer Science
Identifiers
urn:nbn:se:oru:diva-96681 (URN), 10.1016/j.bspc.2021.102424 (DOI), 000636240200036 (ISI), 2-s2.0-85100124254 (Scopus ID)
Available from: 2022-01-25. Created: 2022-01-25. Last updated: 2025-02-07. Bibliographically approved.
Ahnaf, S. A., Rahaman, G. M. & Saha, S. (2021). Understanding CNN's Decision Making on OCT-based AMD Detection. In: 2021 International Conference on Electronics, Communications and Information Technology (ICECIT), 14-16 Sept. 2021. Paper presented at 2021 International Conference on Electronics, Communications and Information Technology (ICECIT), Khulna, Bangladesh, September 14-16, 2021 (pp. 1-4). IEEE
Understanding CNN's Decision Making on OCT-based AMD Detection
2021 (English) In: 2021 International Conference on Electronics, Communications and Information Technology (ICECIT), 14-16 Sept. 2021, IEEE, 2021, p. 1-4. Conference paper, Published paper (Refereed)
Abstract [en]

Age-related macular degeneration (AMD) is the third leading cause of incurable acute central vision loss. Optical coherence tomography (OCT) is a diagnostic process used for both AMD and diabetic macular edema (DME) detection. Spectral-domain OCT (SD-OCT), an improvement on traditional OCT, has revolutionized AMD assessment thanks to its high acquisition rate, efficiency, and resolution. Many techniques have been adopted to detect AMD from normal OCT scans, and automatic detection using deep convolutional neural networks (CNNs) has recently become popular. Despite their strong performance, CNN models are often criticized for not giving any justification for their decision-making. In this paper, we aim to visualize and critically analyze the decisions of CNNs in context-based AMD detection. Multiple experiments were conducted on the Duke OCT dataset, utilizing transfer learning with ResNet50 and VGG16 models. After training the models for AMD detection, Gradient-weighted Class Activation Mapping (Grad-CAM) is used for feature visualization. Each retinal layer mask was compared with the feature-mapped image. We found that the region from the outer nuclear layer to the inner segment myeloid (ONL-ISM) predominates in decision making, contributing about 17.13% for normal and 6.64% for AMD scans.
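A minimal Grad-CAM sketch in PyTorch, not the authors' code; the backbone, the choice of the last convolutional block, and the input size are assumptions. It shows how a class-discriminative heatmap is produced from pooled gradients.

import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights="DEFAULT").eval()
store = {}

# Capture activations and gradients of the last convolutional block.
layer = model.layer4[-1]
layer.register_forward_hook(lambda m, i, o: store.update(act=o))
layer.register_full_backward_hook(lambda m, gi, go: store.update(grad=go[0]))

x = torch.rand(1, 3, 224, 224)            # stand-in for an OCT B-scan
model(x)[0].max().backward()               # backprop the top-class score

w = store["grad"].mean(dim=(2, 3), keepdim=True)   # pooled gradients
cam = F.relu((w * store["act"]).sum(dim=1))        # channel-weighted sum
cam = F.interpolate(cam[None], size=x.shape[-2:], mode="bilinear")[0, 0]
# 'cam' is the heatmap that is then compared against each retinal layer mask.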

Place, publisher, year, edition, pages
IEEE, 2021
Keywords
AMD, OCT, CNN, macula, retina, Grad-Cam, visualization
National Category
Computer graphics and computer vision; Computer Sciences
Research subject
Computerized Image Analysis; Computer Science
Identifiers
urn:nbn:se:oru:diva-96707 (URN), 10.1109/ICECIT54077.2021.9641246 (DOI), 000855845700044 (ISI), 9781665423632 (ISBN), 9781665423649 (ISBN)
Conference
2021 International Conference on Electronics, Communications and Information Technology (ICECIT), Khulna, Bangladesh, September 14-16, 2021
Available from: 2022-01-26. Created: 2022-01-26. Last updated: 2025-02-01. Bibliographically approved.
Rahaman, G. M., Parkkinen, J. & Hauta-Kasari, M. (2020). A Novel Approach to Using Spectral Imaging to Classify Dyes in Colored Fibers. Sensors, 20(16), Article ID 4379.
A Novel Approach to Using Spectral Imaging to Classify Dyes in Colored Fibers
2020 (English) In: Sensors, E-ISSN 1424-8220, Vol. 20, no 16, article id 4379. Article in journal (Refereed), Published
Abstract [en]

In the field of cultural heritage, dyes applied to textiles are studied to explore their great artistic and historic value. Dye analysis is essential for planning correct restoration, preservation, and display strategies in museums and art galleries. However, most existing diagnostic technologies are destructive to the historical objects. In contrast, spectral reflectance imaging holds potential as a non-destructive and spatially resolved technique. There have been hardly any studies on the classification of dyes in textile fibers using spectral imaging. In this study, we show that spectral imaging combined with a machine learning technique is capable of preliminary screening of dyes into the natural or synthetic class. First, a sparse logistic regression algorithm is applied to reflectance data of dyed fibers to determine discriminating bands. Then a support vector machine (SVM) algorithm is applied for classification using the reflectance of the selected spectral bands. The results show that nine selected bands in the shortwave infrared region (SWIR, 1000–2500 nm) classify dyes with 97.4% accuracy (kappa 0.94). Interestingly, the results also show that fairly accurate dye classification can be achieved using only the bands at 1480 nm, 1640 nm, and 2330 nm. This indicates the possibility of building an inexpensive handheld screening device for field studies.
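The two-stage pipeline (sparse logistic regression for band selection, then an SVM on the retained bands) can be sketched with scikit-learn; the regularization strength, kernel, and synthetic data are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((200, 160))    # stand-in SWIR reflectance spectra (fibers x bands)
y = rng.integers(0, 2, 200)   # stand-in labels: 0 = natural, 1 = synthetic

# Stage 1: L1-penalized logistic regression zeroes out uninformative bands.
sparse_lr = LogisticRegression(penalty="l1", solver="liblinear", C=1.0).fit(X, y)
selected = np.flatnonzero(sparse_lr.coef_[0])   # indices of retained bands

# Stage 2: SVM trained on the selected bands only.
svm = SVC(kernel="rbf").fit(X[:, selected], y)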

Place, publisher, year, edition, pages
MDPI, 2020
Keywords
Spectral imaging, classification, logistic regression, cultural heritage, dyes, SVM
National Category
Computer graphics and computer vision; Computer Sciences
Research subject
Computerized Image Analysis; Computer Science
Identifiers
urn:nbn:se:oru:diva-96709 (URN), 10.3390/s20164379 (DOI), 000564589400001 (ISI), 2-s2.0-85089212753 (Scopus ID)
Note

Funding agency:

Ministry of Posts, Telecommunication and Information Technology (ICT Division), Government of the People's Republic of Bangladesh, 56.00.0000.28.33.042.15-530

Available from: 2022-01-26. Created: 2022-01-26. Last updated: 2025-02-01. Bibliographically approved.
Sayed, M. A. A., Saha, S., Rahaman, G. M., Ghosh, T. K. & Kanagasingam, Y. (2020). An innovate approach for retinal blood vessel segmentation using mixture of supervised and unsupervised methods. IET Image Processing, 15(1), 180-190
An innovate approach for retinal blood vessel segmentation using mixture of supervised and unsupervised methods
2020 (English) In: IET Image Processing, ISSN 1751-9659, E-ISSN 1751-9667, Vol. 15, no 1, p. 180-190. Article in journal (Refereed), Published
Abstract [en]

Segmentation of retinal blood vessels is a very important diagnostic procedure in ophthalmology. Segmenting blood vessels in the presence of pathological lesions is a major challenge. In this paper, an innovative approach to segment the retinal blood vessels in the presence of pathology is proposed. The method combines both supervised and unsupervised approaches in the retinal imaging context. Two innovative descriptors named local Haar pattern and modified speeded up robust features are also proposed. Experiments are conducted on three publicly available datasets named DRIVE, STARE and CHASE DB1, and the proposed method has been compared against the state-of-the-art methods. The proposed method is found to be about 1% more accurate than the best performing supervised method and 2% more accurate than the state-of-the-art Nguyen et al.’s method.
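The abstract does not specify how the supervised and unsupervised outputs are fused; a simple pixel-wise OR of the two vessel maps is shown below purely as an assumed illustration of such a combination.

import numpy as np

def combine_vessel_maps(supervised_prob: np.ndarray,
                        unsupervised_mask: np.ndarray,
                        threshold: float = 0.5) -> np.ndarray:
    # Fuse a classifier's vessel probability map with an unsupervised
    # (e.g. line-detector) binary response into one segmentation.
    return np.logical_or(supervised_prob > threshold,
                         unsupervised_mask.astype(bool))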

Place, publisher, year, edition, pages
Institution of Engineering and Technology (IET), 2020
National Category
Computer graphics and computer vision; Computer Sciences
Research subject
Computerized Image Analysis; Computer Science
Identifiers
urn:nbn:se:oru:diva-96705 (URN), 10.1049/ipr2.12018 (DOI), 000599424400001 (ISI), 2-s2.0-85108909363 (Scopus ID)
Available from: 2022-01-26. Created: 2022-01-26. Last updated: 2025-02-01. Bibliographically approved.
Sayed, M. A. A., Saha, S., Rahaman, G. M., Ghosh, T. K. & Kanagasingam, Y. (2019). A Semi-supervised Approach to Segment Retinal Blood Vessels in Color Fundus Photographs. In: David Riaño; Szymon Wilk; Annette ten Teije (Ed.), Artificial Intelligence in Medicine: 17th Conference on Artificial Intelligence in Medicine, AIME 2019, Poznan, Poland, June 26–29, 2019, Proceedings. Paper presented at 17th Conference on Artificial Intelligence in Medicine (AIME 2019), Poznan, Poland, June 26–29, 2019 (pp. 347-351). Springer
A Semi-supervised Approach to Segment Retinal Blood Vessels in Color Fundus Photographs
2019 (English) In: Artificial Intelligence in Medicine: 17th Conference on Artificial Intelligence in Medicine, AIME 2019, Poznan, Poland, June 26–29, 2019, Proceedings / [ed] David Riaño; Szymon Wilk; Annette ten Teije, Springer, 2019, p. 347-351. Conference paper, Published paper (Refereed)
Abstract [en]

Segmentation of retinal blood vessels is an important diagnostic procedure in ophthalmology. In this paper we propose an automated blood vessel segmentation method that combines supervised and unsupervised approaches. A novel descriptor named Local Haar Pattern (LHP) is proposed to describe retinal pixels of interest. The performance of the proposed method has been evaluated on the three publicly available DRIVE, STARE and CHASE_DB1 datasets. The proposed method achieves overall segmentation accuracies of 96%, 96% and 95% on the DRIVE, STARE, and CHASE_DB1 datasets, respectively, which is better than the state-of-the-art methods.
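The exact Local Haar Pattern definition is not given in the abstract; the generic Haar-like differences below, computed over a pixel's square neighborhood, are shown only to convey the flavor of such a descriptor (window size and feature set are assumptions).

import numpy as np

def haar_like_descriptor(window: np.ndarray) -> np.ndarray:
    # Differences of mean intensity between opposing halves of the
    # window centred on the pixel of interest.
    h, w = window.shape
    top_bottom = window[: h // 2].mean() - window[h // 2 :].mean()
    left_right = window[:, : w // 2].mean() - window[:, w // 2 :].mean()
    return np.array([top_bottom, left_right])

# Per pixel, such features would be fed to the random-forest classifier
# together with responses such as the multiscale line detector's.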

Place, publisher, year, edition, pages
Springer, 2019
Series
Lecture Notes in Computer Science, ISSN 0302-9743, E-ISSN 1611-3349 ; 11526
Keywords
Color fundus photographs, Vessel segmentation, Haar feature, Multiscale line detector, Random forest
National Category
Computer graphics and computer vision; Computer Sciences
Research subject
Computerized Image Analysis; Computer Science
Identifiers
urn:nbn:se:oru:diva-96711 (URN), 10.1007/978-3-030-21642-9_44 (DOI), 000495606500044 (ISI), 9783030216412 (ISBN), 9783030216429 (ISBN)
Conference
17th Conference on Artificial Intelligence in Medicine (AIME 2019), Poznan, Poland, June 26–29, 2019
Available from: 2022-01-26. Created: 2022-01-26. Last updated: 2025-02-01. Bibliographically approved.