TY - EJOU
AU - Amin, Muhammad Shahid
AU - Shah, Jamal Hussain
AU - Yasmin, Mussarat
AU - Ansari, Ghulam Jillani
AU - Khan, Muhammad Attique
AU - Tariq, Usman
AU - Kim, Ye Jin
AU - Chang, Byoungchol
TI - A Two Stream Fusion Assisted Deep Learning Framework for Stomach Diseases Classification
T2 - Computers, Materials & Continua
PY - 2022
VL - 73
IS - 2
SN - 1546-2226
AB - With the rapid development of Artificial Intelligence (AI) and Deep Learning (DL), it has become difficult to maintain the security and robustness of these techniques and algorithms due to the emergence of adversarial sampling. Such samples exploit model sensitivity, so fabricated inputs can cause AI and DL models to produce inconsistent results. Adversarial attacks that have been carried out successfully in real-world scenarios further highlight their practical relevance. In this regard, minor modifications of input images constitute “adversarial attacks” that can dramatically alter the performance of the attacked models. Recently, such attacks and the corresponding defensive strategies have attracted considerable attention from machine learning and security researchers. Doctors use various technologies, including Wireless Capsule Endoscopy (WCE), to examine patient abnormalities. However, with WCE it is difficult for doctors to detect abnormalities within images, since inspecting the frames and reaching a decision is time-consuming; as a result, generating a patient’s test report can take weeks, which is tiring and strenuous for patients. Researchers have therefore adopted computerized techniques that are better suited to the classification and detection of such abnormalities. As far as classification is concerned, adversarial attacks cause problems in the classified images. Nowadays, machine learning is the mainstream defensive approach for handling such attacks. Hence, this research exposes the attacks by perturbing the datasets with noise, including salt-and-pepper noise and the Fast Gradient Sign Method (FGSM), and then shows how machine learning algorithms can handle these perturbations in order to withstand the attacks. Results obtained on WCE images that are vulnerable to adversarial attack reach 96.30% accuracy and demonstrate that the proposed defensive model is robust compared to competing existing methods.
KW - WCE images; adversarial attacks; FGSM noise; salt and pepper noise; feature fusion; deep learning
DO - 10.32604/cmc.2022.030432
ER -