Performance Analysis of Learning for Gastric Cancer Prediction from Endoscopic Images


Dr T Velumani

Abstract

In terms of mortality, gastric cancer is second only to lung cancer. Manual pathological examination of gastric tissue slices is labour-intensive and prone to observer bias, and endoscopy of the upper digestive tract is commonly used to screen for gastric cancer. An object detection model, a form of deep learning, has previously been proposed to automate the diagnosis of early gastric cancer from endoscopic images; however, it proved difficult to reduce the number of false positives among the detected findings. In this study, tumour segmentation is instead performed on the preprocessed images, a task that is often more challenging and more critical. The proposed approach relies on multi-scale parallel convolution (MPC) blocks, which apply filters of different sizes to extract features relevant across a range of tumour sizes. Residual connections and residual blocks further support feature extraction with fewer parameters. In addition, the Artificial Plant Optimisation Algorithm (APOA) is used to fine-tune the segmentation model's parameters without resorting to post-processing. Finally, gastric cancer is detected from endoscopic images using a hybrid classification strategy that combines a Convolutional Neural Network (CNN) and a Recurrent Neural Network (RNN). Detection performance was evaluated with 5-fold cross-validation on 1208 images from healthy subjects and 533 images from patients with gastric cancer. The results indicate that the proposed method is promising for automated early gastric cancer diagnosis from endoscopic images.
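
The abstract does not give implementation details, so the following is only a minimal sketch, in PyTorch, of what a multi-scale parallel convolution block with a residual connection could look like. The kernel sizes (1, 3, 5), channel counts, 1x1 fusion and skip-projection layers, and the class name MPCBlock are assumptions made for illustration, not the authors' exact architecture.

```python
# Illustrative sketch only: a multi-scale parallel convolution (MPC) block with a
# residual connection. Kernel sizes, channel counts, and the 1x1 fusion layer are
# assumptions for illustration, not the design described in the paper.
import torch
import torch.nn as nn


class MPCBlock(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_sizes=(1, 3, 5)):
        super().__init__()
        # Parallel branches with different kernel sizes capture features at
        # multiple tumour scales.
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, k, padding=k // 2),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for k in kernel_sizes
        ])
        # 1x1 convolution fuses the concatenated branch outputs back to out_ch.
        self.fuse = nn.Conv2d(out_ch * len(kernel_sizes), out_ch, 1)
        # Residual projection so the skip connection matches the output shape.
        self.skip = nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch else nn.Identity()

    def forward(self, x):
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return torch.relu(self.fuse(multi_scale) + self.skip(x))


# Example: one block applied to a 3-channel endoscopic image patch.
block = MPCBlock(in_ch=3, out_ch=32)
features = block(torch.randn(1, 3, 224, 224))  # -> shape (1, 32, 224, 224)
```

Stacking such blocks in an encoder-decoder segmentation network would be one plausible way to realise the described design; the paper itself should be consulted for the actual layer configuration and how APOA tunes its parameters.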


 
