https://doi.org/10.31449/inf.v47i1.4433 Informatica 47 (2023) 115–130

A Novel Method for Human MRI Based Pancreatic Cancer Prediction Using Integration of Harris Hawks Variants & VGG16: A Deep Learning Approach

Rama Prakasha Reddy Chegireddy1*, A. SriNagesh2
1 Research Scholar, Department of Computer Science and Engineering, Dr. Y.S.R. ANU College of Engineering and Technology, Acharya Nagarjuna University, Guntur, Andhra Pradesh, India.
2 Professor, Department of Computer Science and Engineering, RVR&JC College of Engineering, Guntur, Andhra Pradesh, India.
E-mail: reddysinfo@gmail.com, asrinagesh@gmail.com
* Corresponding author

Received: October 4, 2022

Keywords: BADF, classification, CLAHE, deep learning, pancreatic cancer, segmentation, UNET, medical image processing, image segmentation

Among all cancers, pancreatic cancer has a very poor prognosis, and early diagnosis as well as successful treatment are difficult to achieve. With the death rate rising rapidly (47,050 deaths out of 57,600 cases), it is of utmost importance for medical experts to diagnose PC at earlier stages. The application of Deep Learning (DL) techniques in the medical field has brought revolutionary advances in this era of technological progress. Clinical proteomic tumor data provided by the Clinical Proteomic Tumor Analysis Consortium Pancreatic Ductal Adenocarcinoma (CPTAC-PDA) cohort at the National Cancer Institute were used to demonstrate an innovative deep learning approach in this study. The approach comprises a) collection of data; b) preprocessing using CLAHE and BADF techniques for noise removal and image enhancement; c) segmentation of cancerous regions of interest using UNet++; d) feature extraction using HHO based on CNN; e) feature selection using HHO based on BoVW; and finally f) classification using the VGG16 network for better analysis. Experiments against various state-of-the-art models over various measures show that the proposed model outperforms them, with accuracy: 0.96, sensitivity: 0.97, specificity: 0.98, and detection rate: 0.95.

Povzetek: A deep learning method for predicting pancreatic cancer is described.

1 Introduction

The death rate from pancreatic cancer (PC) in the United States is among the highest of all cancers. Despite aggressive treatment approaches and combination modalities, the 5-year survival rate remains around 5%. According to 2017 SEER data [1], pancreatic ductal adenocarcinoma accounted for 47,050 deaths, and 57,600 new cases were reported. By 2030, PDAC is expected to become the second leading cause of cancer-related mortality [2]. Only 15 to 20% of patients qualify for potentially curative surgery because of non-specific indications and late discovery [3]. Whipple surgery, left pancreatectomy, and total pancreatectomy are the three surgical options for pancreatic cancer treatment. By analyzing the resected tissue, it is possible to determine whether lymph nodes are metastasizing from the tumor, as well as whether pre-invasive pancreatic intraepithelial neoplasia is present; further therapeutic management is based on the pathological results [4]. It is important to distinguish neoplastic cells from benign or inflammatory cells to obtain a clear picture of the tumor. Because of the tremendous heterogeneity between and within tumors in growth pattern, cytology, and stroma (Figure 1), this can be a daunting task.
A fibrotic and inflammatory microenvironment contributes to the heterogeneity and complex growth pattern of tumors, with the stroma constituting most of the tumor mass [5]. On microscopic examination, PDAC is primarily glandular, with extensive desmoplastic stroma formation. However, other structures can also be observed, including (micro-)papillae, solid nests, cribriform patterns, or small, single-cell tumors [6]. Several molecular factors are associated with the development of non-glandular, histologically poorly differentiated tumor growth patterns, such as mesenchymal phenotypes, proteases, and neutrophil infiltrates [7, 8]. PDAC grows in a dispersed pattern: the tumor cells are not usually grouped, but are instead found in cellular clusters which encroach on the surrounding tissues, nerve sheaths, and vascular networks [9]. A PanIN (Pancreatic Intraepithelial Neoplasia) is the precursor lesion of PDAC (Figure 2), analogous to the ductal epithelial precursors of colon cancers, in which ductal cells proliferate to become invasive cancer.

Figure 1: Pancreatic cancer: MRI image of risk patients.

A healthy pancreas and chronic pancreatitis have glandular and ductal features grouped in an organoid-lobular configuration, while a malignant pancreas has tumor glands that are dispersed throughout the stroma, distorted, and display solitary cells [10, 11]. Chronic pancreatitis is characterized by fibrosis, ductal tissue loss, and acinar thinning, all of which are linked to an increased risk of invasive carcinoma [12]. PDAC review time for slides with histological microarchitecture, distributed development, varied microclimates, preinvasive lesions, inflammatory tissue, and sealed anatomical tissue is estimated at 1 to 2 minutes per slide [13]. The time factor is significant for diagnosis even when diagnostic accuracy is high, and it will become even more significant as the overall number of specialist pathologists declines and as the general demand for information and specialization, as well as the number of patients, increases [14, 15]. Techniques that enable and promote morphology-based tissue slide evaluation and flag crucial regions for further study by professional pathologists are thus necessary. Digital pathology has evolved as a means of evaluating histopathology slides, supporting routine diagnostics and research, as well as ensuring quality control. Reproducible tissue categories are very important in spatial tissue studies. Deep learning methods have previously been demonstrated to be effective in determining lymph node metastases and classifying tumor subsets [16].

1.1 Research gap

By identifying the onset period, pancreatic disease could be prevented from remaining a leading cause of death. One of the most difficult tasks for the radiologist has been identifying nodules in the stomach wall. Nodules of the pancreas have diverse shapes and sizes, which makes small nodules difficult to identify. While segmenting the tumor region, difficulties such as over-segmentation and under-segmentation can develop. While many imaging modalities are available, using the more reliable and convenient modality is important for early tumor detection. To identify and characterize the tumor's location, researchers have recommended several procedures.
The contrast of MRI for soft tissues is better than that of CT, and MRI can differentiate fat, water, muscles, and other soft tissues more easily. Additionally, MRI has a higher sensitivity (33%) for detecting tumors than CT (11%). The primary goal of this research is to propose a better framework that detects and classifies pancreatic cancer from MRI images to support radiologists in making diagnostic decisions.

1.2 Key highlights

This article aims to optimize methods and propose a framework for detecting and classifying pancreatic cancer using deep learning and image processing techniques. The primary objectives are as follows:

• To propose a framework based on MRI images to detect and classify pancreatic cancer.
• To improve MRI image quality using the Boosted Anisotropic Diffusion Filter (BADF) and contrast-limited adaptive histogram equalization (CLAHE) algorithms.
• To use the UNet++ architecture to create a Computer-Aided Detection (CAD) method for the early identification of pancreatic cancers. The pancreatic region associated with a lesion is precisely separated from the MRI image by segmentation using UNet++.
• To extract the best subset of texture features to enhance classification accuracy, and to create a classification system based on these texture features using HHO-based CNNs and HHO-based bags of visual words.
• To distinguish different levels of malignancy in an MRI image by developing a classifier based on the VGG16 model.
• To perform quantitative analysis for various tumor classes and assess the accuracy of the proposed classifier against state-of-the-art work.

Organization of the paper: Section 1 gave an overview of PDAC and its respective areas; Section 2 discusses the literature review; Section 3 illustrates the overall methodology; Section 4 presents the performance analysis; Section 5 provides the discussion; and Section 6 summarizes the conclusion.

2 Literature review

Tonozuka et al. (2021) [17] developed a Computer-Aided Diagnostics (CAD) approach that used deep learning assessment of EUS pictures (EUS-CAD) to distinguish between persons with chronic pancreatitis and those with Pancreatic Ductal Adenocarcinoma (PDAC). Liu et al. (2020) [18] used a CNN to determine whether patches were carcinogenic; a criterion for identifying pancreatic cancer was created from the fraction of patches designated as carcinogenic by the CNN on the training and validation datasets. The researchers utilized a local test group (101 pancreatic cancer patients and 88 controls, local test group 2) in addition to data from the United States (281 pancreatic cancer patients and 82 controls). In this study, EM algorithms and Gaussian mixture models were integrated to highlight the most necessary properties of the CT scan, and threshold values were used to determine the percentage of tumor present in the pancreas. Vaiyapuri et al. (2022) [19] introduce an intelligent deep-learning-enabled decision-making medical system for pancreatic tumor classification (IDLDMS-PTC) using CT images. The IDLDMS-PTC model derives an emperor penguin optimizer (EPO) with multilevel thresholding (EPO-MLT) technique for pancreatic tumor segmentation. A MobileNet model is applied as a feature extractor with an optimal autoencoder (AE) for pancreatic tumor classification.
To optimally adjust the weight and bias values of the AE technique, the multileader optimization (MLO) technique was utilized. Abbas et al. (2021) [20] suggest a Computer-Aided Diagnosis (CAD) system that uses Synergic Inception ResNet-V2, a deep convolutional neural network architecture, to identify PC cases from publicly available CT images. This system can extract PC graphical features to inform clinical diagnosis before the pathogenic examination, freeing up valuable time for disease prevention. MATLAB simulation results are provided to demonstrate the relatively encouraging accuracy in recognizing PC-infected patients; the suggested deep learning approach achieves an accuracy of 99.23%. Li et al. (2022) [21] offer a deep-learning segmentation technique for pancreatic cancer based on a dual meta-learning framework. It combines generic tumor information from idle MRIs with prominent tumor information from CT images to improve the discrimination of high-level features. To provide rich intermediate representations for the subsequent meta-learning step, a randomized intermediary modality between CT and MRI was first constructed to fill in visual gaps.

Table 1: Summary of literature review.

| Author | Algorithm | Metrics | Strength | Weakness |
|---|---|---|---|---|
| Tonozuka et al. (2021) [17] | AlexNet | AUROC 0.924, sensitivity 90.2, specificity 74.9 | Higher-resolution EUS images are used; higher sensitivity. | Risks and feasibility of EUS imaging. |
| Fu et al. (2021) [19] | Inception V3 | Accuracy 0.953 | Patch-level and WSI-level approach improves the overall classification accuracy. | The algorithm recognizes cancer cells mainly from nuclear features, hence prone to false positive results. |
| Liu et al. (2020) [18] | VGG-16 | Sensitivity 0.973, specificity 1.000, accuracy 0.986 | Achieved an accuracy approaching 99% and missed fewer tumors than radiologists. | Uses CT scans, which show lower tumor detection sensitivity (11%) than MRI (33%). |
| Abbas et al. (2021) [20] | ResNet | Accuracy 99.23 | The isolateral filter enhances the quality of poor images during preprocessing. | Uses CT scans, which show lower tumor detection sensitivity (11%) than MRI (33%). |
| Li et al. (2022) [21] | GoogleNet | Dice score 64.94 | Dual meta-learning framework for pancreatic cancer using MRI as well as CT; outperforms state-of-the-art methods based on CT imaging. | NA |

3 Methodology

This section outlines a novel approach for classifying pancreatic cancers based on the Pancreatic Ductal Adenocarcinoma cohort of the Clinical Proteomic Tumor Analysis Consortium (CPTAC-PDA) dataset. While various imaging techniques are available, MRI demonstrates improved tumor detection sensitivity, which aids in discovering smaller tumors (Grade I). The novelty of this study is the application of image-enhancement methods and optimization strategies to MRI images to increase classification accuracy compared with the state-of-the-art research under discussion. The overall design of the proposed framework is shown in Figure 3, with the steps outlined below.

During the pre-processing step, CLAHE and BADF are used to enhance the images obtained from the publicly available MRI image collection CPTAC-PDA. The CLAHE method divides a source image into non-overlapping contextual components known as sub-images, tiles, or blocks, and applies histogram equalization to balance each contextual area.
The original histogram is clipped, and the clipped pixels are redistributed throughout the grey levels: unlike traditional histogram equalization, CLAHE caps pixel counts at a maximum value and redistributes the excess. The proposed BADF improves on the existing anisotropic diffusion filter by including a Partial Differential Equation (PDE) after it generates the diffused image. It is an unsupervised image-enhancement technique: the diffusion process is weak at the edges and borders of the image, so it not only smooths the image but also preserves important characteristics such as edges and patterns. Based on extensive testing, excellent results were achieved with the number of iterations set to 20.

Once images are preprocessed, segmentation is carried out; this is a crucial part of an image classification method, where the MRI image is segmented to isolate the nodules. In this work, the UNet++ architecture is used for the segmentation of MRI images. Once segmented regions are obtained, features are extracted and selected using HHO-based CNN and HHO-based BoVW. After segmentation and feature extraction, the segmented tumor is identified using texture features. Finally, the VGG-16 model is used to distinguish between normal tissue and tumor grades in the MRI images. The Convolutional Neural Network (CNN) architecture VGG-16 is one of the best models for image classification and allows transfer learning, the process of applying the knowledge gained from one problem to another related problem for further improvement.

3.1 Data collection

A dataset of CPTAC-PDA pancreatic ductal adenocarcinomas from the National Cancer Institute is used here. CPTAC's goal is proteogenomics, a large-scale method of studying cancer genetics [22].

Figure 3: The overall architecture of the proposed framework.

The Cancer Imaging Archive (TCIA) is collecting radiology and pathology images from CPTAC patients to provide researchers with access to these images so they can investigate cancer phenotypes and correlate them with proteomic, genomic, and clinical findings. There is a TCIA collection for each type of cancer, called CPTAC-[cancer type], which stores the images for that type. Radiology pictures are compiled from routine imaging conducted on patients immediately before pathology diagnoses, as well as follow-up scans where available. As a result, the radiology image sets are varied in terms of scanner modalities, vendors, and acquisition processes. The CPTAC (Figure 4) qualification method includes collecting pathology images. The National Cancer Institute's Clinical Proteomic Tumor Analysis Consortium Pancreatic Ductal Adenocarcinoma (CPTAC-PDA) collection contains 45,786 pancreatic images from CPTAC third-phase patients, covering 45 radiology subjects and 77 pathology subjects [23]. This dataset includes samples from CT, CR, and MRI scans. The pictures are of various sizes, but they were resized to 128 × 128 in the current work. Using multiple modalities in the training step increases the robustness of the resulting solution to the diverse qualities of the various imaging techniques.

3.2 Preprocessing

Preprocessing is carried out to remove noise and anomalies and thereby enhance the images for better prediction. Here we use both CLAHE and BADF.
They are compared in Table 2 against ADF and AHE over the PSNR, SSIM, and MSE measures for better preprocessing analysis; the higher the PSNR and SSIM and the lower the MSE, the more accurate the result.

Figure 4: Pathology-confirmed pancreatic ductal adenocarcinoma in an elderly female patient. Fat-suppressed LAVA T1- (A) and T2- (B) weighted imaging, C) MRI Cholangio-Pancreatography (MRCP), and Gadolinium-enhanced images in D) arterial, E) portal, and F) delayed phases.

3.2.1 CLAHE

Because the pancreas adjoins other organs such as the duodenum and gallbladder, the input volume was enhanced to make the pancreas more visible. To begin, we windowed the MRI images with a window center of 60 and a window width of 400 to make the abdomen visible. The base dataset was then constructed by boosting the contrast of the pancreatic region with contrast-limited adaptive histogram equalization (CLAHE) [24-27]. In the dynamic histogram equalization method, each pixel is mapped according to its grayscale neighbors; because the approach is applied as many times as there are pixels in the area, it consumes considerable processing resources. CLAHE mitigates this by establishing a clip criterion: if some of the picture's grey levels surpass the threshold, the surplus is dispersed equally among all grey levels. As a result, the image is not over-enhanced, and the issue of noise amplification is minimized.

Algorithm 1: CLAHE.
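As an illustration of Algorithm 1, the following is a minimal sketch of the windowing and CLAHE steps using OpenCV. The window center (60) and width (400) come from the text above; the clip limit and tile grid size are illustrative assumptions, not values reported here.

```python
import cv2
import numpy as np

def window_image(img, center=60, width=400):
    """Clip raw intensities to an abdominal window (center/width as stated above)."""
    lo, hi = center - width / 2, center + width / 2
    img = np.clip(img.astype(np.float32), lo, hi)
    # Rescale to 8-bit so CLAHE can be applied
    return ((img - lo) / (hi - lo) * 255).astype(np.uint8)

def enhance(img):
    """Contrast-limited adaptive histogram equalization over tiles.
    clipLimit and tileGridSize are illustrative choices."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(window_image(img))
```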
3.2.2 BADF

The anisotropic diffusion filter, also known as the Perona-Malik diffusion process after the people who devised it, focuses primarily on eliminating noise while maintaining fine features in the image. In general, the filter employs much the same methodology as edge detection: the anisotropic diffusion filtering process may be described through multiple blurred pictures generated by the diffusion process. The proposed BADF improves on the previous anisotropic diffusion filter by adding a Partial Differential Equation (PDE) after creating the diffused image. Diffusion, which is absent at the edges and boundaries, can be utilized to smooth the surface [28]. Four conduction coefficients, given by Equations (1)-(4), are used to attenuate the high-frequency elements in each direction:

$g_N = \dfrac{1}{1 + \left(\nabla_N I_{i,j}/K\right)^2}$  (1)

$g_S = \dfrac{1}{1 + \left(\nabla_S I_{i,j}/K\right)^2}$  (2)

$g_E = \dfrac{1}{1 + \left(\nabla_E I_{i,j}/K\right)^2}$  (3)

$g_W = \dfrac{1}{1 + \left(\nabla_W I_{i,j}/K\right)^2}$  (4)

K is a scalar that controls the level of smoothness and must satisfy K > 1; a higher value of K results in smoother outcomes. In a standard anisotropic diffusion filter, K is set to 7. In this work, K is calculated automatically from local statistics using Equation (5) [29]:

$K = 2\left|\dfrac{\mathrm{mean}(f_{i,j})}{0.75\,\sigma(f_{i,j})}\right|$  (5)

Here, σ denotes the standard deviation. The image is then smoothed using Equation (6):

$I_{i,j} = I_{i,j} + 0.25\left[(g_N \nabla_N I_{i,j}) + (g_S \nabla_S I_{i,j}) + (g_E \nabla_E I_{i,j}) + (g_W \nabla_W I_{i,j})\right]$  (6)

where $I_{i,j}$ is the smoothed image.

Algorithm 2: BADF.
Step 1: Double the size of the input image.
Step 2: Initialize diff_im, the PDE (partial differential equation) output.
Step 3: Set the pixel distances in the centre: dx = 1; dy = 1.
Step 4: Define four 2D convolution masks (N, S, E, W):
hN = [0 1 0; 0 -1 0; 0 0 0]; hS = [0 0 0; 0 -1 0; 0 1 0]; hE = [0 0 0; 0 -1 1; 0 0 0]; hW = [0 0 0; 1 -1 0; 0 0 0];
Step 5: Compute the finite differences before evaluating the diffusion function.
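The following is a minimal NumPy sketch of one BADF diffusion iteration, following Equations (1)-(6) and the masks of Algorithm 2. The adaptive K of Equation (5) is computed here from global image statistics for simplicity, whereas the text above computes it from local statistics.

```python
import numpy as np
from scipy.ndimage import convolve

# Finite-difference masks from Algorithm 2 (N, S, E, W)
H = {
    "N": np.array([[0, 1, 0], [0, -1, 0], [0, 0, 0]], float),
    "S": np.array([[0, 0, 0], [0, -1, 0], [0, 1, 0]], float),
    "E": np.array([[0, 0, 0], [0, -1, 1], [0, 0, 0]], float),
    "W": np.array([[0, 0, 0], [1, -1, 0], [0, 0, 0]], float),
}

def badf_step(img):
    """One diffusion iteration, Eqs. (1)-(6); K adapted as in Eq. (5),
    here from global rather than local statistics."""
    k = 2 * abs(img.mean() / (0.75 * img.std()))   # Eq. (5)
    out = img.copy()
    for h in H.values():
        grad = convolve(img, h)                    # directional finite difference
        g = 1.0 / (1.0 + (grad / k) ** 2)          # conduction, Eqs. (1)-(4)
        out += 0.25 * g * grad                     # update, Eq. (6)
    return out

def badf(img, iterations=20):
    """The text above reports good results with 20 iterations."""
    img = img.astype(float)
    for _ in range(iterations):
        img = badf_step(img)
    return img
```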
Table 2: Overall analysis under PSNR, SSIM, and MSE.

| Image | Model | PSNR | SSIM | MSE |
|---|---|---|---|---|
| Image 1 | AHE | 23.56 | 0.24 | 3.29 |
| | ADF | 22.8 | 0.33 | 4.71 |
| | CLAHE | 43.9 | 0.72 | 8.44 |
| | BADF | 46.2 | 0.85 | 9.31 |
| Image 2 | AHE | 23.59 | 0.245 | 3.3 |
| | ADF | 22.83 | 0.333 | 4.75 |
| | CLAHE | 43.95 | 0.726 | 8.49 |
| | BADF | 46.27 | 0.851 | 9.36 |
| Image 3 | AHE | 23.6 | 0.25 | 3.32 |
| | ADF | 22.9 | 0.34 | 4.79 |
| | CLAHE | 44.1 | 0.73 | 8.5 |
| | BADF | 46.3 | 0.86 | 9.4 |
| Image 4 | AHE | 23.62 | 0.257 | 3.37 |
| | ADF | 22.93 | 0.345 | 4.8 |
| | CLAHE | 44.12 | 0.738 | 8.53 |
| | BADF | 46.36 | 0.862 | 9.46 |
| Image 5 | AHE | 23.68 | 0.26 | 3.4 |
| | ADF | 23.2 | 0.35 | 4.83 |
| | CLAHE | 44.18 | 0.74 | 8.6 |
| | BADF | 46.4 | 0.87 | 9.5 |

3.3 Segmentation

The proposed design is depicted in Figure 5a from a high-level perspective. UNet++ is based on an encoder subnetwork followed by a decoder subnetwork. The skip paths connecting the two subnetworks (shown in green and blue) have been redesigned, and deep supervision (shown in red) distinguishes UNet++ from U-Net [30, 31].

Figure 5: (a) In UNet++, an encoder and a decoder are linked via dense convolutional blocks. Before fusion, UNet++ primarily bridges the semantic gap between encoder and decoder feature maps. Black blocks are the original U-Net, green and blue are the dense convolution blocks on the skip pathways, and red indicates deep supervision. (b) A detailed view of UNet++'s first skip pathway. (c) When UNet++ is trained with deep supervision, it can be pruned during inference. (Color image from [33].)

3.3.1 Redesigned skip pathways

The communication between the encoder and decoder sub-networks has improved thanks to redesigned skip paths. In U-Net, the retrieved attributes from the encoder pass directly to the decoder; the UNet++ method, however, uses dense convolution blocks, whose number is determined by the pyramid level. Convolution blocks X^{0,0} and X^{1,3}, for example, contain three convolution layers. Because of concatenation, the result of each convolution layer is merged with the preceding dense block outputs. Through dense convolution, features extracted from the encoder are transformed into feature maps that the decoder can decode. The idea is that the optimization problem becomes simpler when the extracted encoder features and the corresponding decoder feature maps are semantically equivalent. A summary of the skip path is as follows: let x^{i,j} denote the output of node X^{i,j}, where i is the encoder's down-sampling level and j is the convolution layer index along the dense block. The stack of extracted features x^{i,j} is computed as:

$x^{i,j} = \begin{cases} H\left(x^{i-1,j}\right) & j = 0 \\ H\left(\left[\left[x^{i,k}\right]_{k=0}^{j-1},\; U\left(x^{i+1,j-1}\right)\right]\right) & j > 0 \end{cases}$  (7)

where H(·) is a convolution followed by an activation function and U(·) is an upsampling layer. A node at level j > 1 accepts j + 1 inputs: j inputs representing the outputs of the previous nodes along the skip path, plus the upsampled output of the lower skip path. Level j = 0 accepts only input from the encoder layer above it; level j = 1 accepts input from the encoder sub-network at a different stage; and level j > 1 accepts input from a lower encoder sub-network.

Because each skip pathway employs a dense convolution block, all previously extracted features blend and reach the current node. Figure 5b illustrates how the feature maps flow through UNet++'s top skip pathway, which further clarifies Eq. (7).

3.3.2 Deep supervision

Deep supervision is provided by UNet++ [30, 31] so that the model can run in two modes: (1) accurate mode, where the outputs of the classification branches are averaged, and (2) fast mode, where one of the classification branches is used as the segmentation map, the choice depending on the amount of pruning in the model and the desired speed-up. In fast mode, selecting a segmentation branch gives designs of variable complexity, as seen in Figure 5c. With UNet++, skip pathways with full-resolution feature maps at multiple semantic levels, {x^{0,j}, j ∈ {1, 2, 3, 4}}, can be deeply supervised. Each semantic level is assigned a loss function combining binary cross-entropy and the Dice coefficient:

$\mathcal{L}(Y, \hat{Y}) = -\dfrac{1}{N}\sum_{b=1}^{N}\left(\dfrac{1}{2}\, Y_b \cdot \log \hat{Y}_b + \dfrac{2\, Y_b \cdot \hat{Y}_b}{Y_b + \hat{Y}_b}\right)$  (8)

where N is the batch size, and $\hat{Y}_b$ and $Y_b$ are the flattened predicted probabilities and the ground truth of the b-th image, respectively. The differences between UNet++ and U-Net, shown in Figure 5a, are: (1) convolution layers on the skip pathways (green), which improve gradient flow; (2) densely packed skip connections on the skip pathways (blue); and (3) deep supervision (red), which enables pruning and, in the worst-case scenario, performs comparably to using a single loss layer.
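As a reference for Equation (8), the following is a small PyTorch sketch of the combined binary cross-entropy and Dice loss, with reduction details simplified; `pred` is assumed to hold sigmoid probabilities, and the ½ weighting follows the equation above.

```python
import torch

def bce_dice_loss(pred, target, eps=1e-6):
    """Hybrid loss in the spirit of Eq. (8).
    pred: sigmoid probabilities, target: binary masks, both shaped (N, H, W)."""
    pred, target = pred.flatten(1), target.flatten(1)
    bce = (0.5 * target * torch.log(pred + eps)).mean(dim=1)          # (1/2) Y log Y_hat
    dice = 2 * (pred * target).sum(dim=1) / (pred.sum(dim=1) + target.sum(dim=1) + eps)
    return -(bce + dice).mean()   # negate so that minimizing maximizes both terms
```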
3.4 Feature extraction

The HHO algorithm is a recent metaheuristic stochastic approach that mathematically models the behavior of Harris hawks, which track, encircle, and approach potential prey (usually rabbits) and then attack with excellent synchronization; the surprise pounce is a smart hunting technique used against escaping prey. Like earlier meta-heuristic algorithms [34, 35], the HHO technique includes exploratory and exploitative phases. During the exploration phase, Harris hawks pursue prey randomly, according to Equation (9):

$X(t+1) = \begin{cases} X_{rand}(t) - r_1 \left|X_{rand}(t) - 2 r_2 X(t)\right| & q \geq 0.5 \\ \left(X_{rabbit}(t) - X_m(t)\right) - r_3\left(LB + r_4 (UB - LB)\right) & q < 0.5 \end{cases}$  (9)

where X(t + 1) is the position of the hawks at the next iteration, X_rabbit(t) is the position of the rabbit (victim), r_1 to r_4 and q are random numbers in (0, 1), X_rand(t) is a randomly selected hawk at a random location, and X_m denotes the current hawk population's average position, computed by Equation (10):

$X_m(t) = \dfrac{1}{N}\sum_{i=1}^{N} X_i(t)$  (10)

where X_i(t) is the position of each hawk at iteration t and N is the total number of hawks. When the exploration step finishes, a transition occurs between the exploration and exploitation periods. The rabbit's energy during this transition is modeled by Equation (11):

$E = 2E_0\left(1 - \dfrac{t}{T}\right)$  (11)

where E represents the rabbit's escaping energy, E_0 its initial energy state, and T the maximum number of iterations. Depending on the victim's physical condition, E_0 varies between -1 and 1; as E_0 approaches -1, the prey loses energy, and vice versa. During the last stages of the algorithm, the Harris hawks suddenly approach their victim. There are four attack strategies, where r is the prey's probability of escaping. Harris hawks use a soft besiege strategy to slowly encircle the target when E ≥ 0.5 and r ≥ 0.5. The mathematical model is as follows:

$X_i^{t+1} = \Delta X_i^t - E\left|J X_{prey} - X_i^t\right|, \qquad \Delta X_i^t = X_{prey} - X_i^t$  (12)

where J represents the strength of the prey's jumping during the escape, taking a random value between 0 and 2, and ΔX_i^t is the distance between the hawk and the prey. The prey cannot escape when E < 0.5 and r ≥ 0.5, due to insufficient escaping energy, and the Harris hawks' position is written as:

$X_i^{t+1} = X_{prey} - E\left|\Delta X_i^t\right|$  (13)

When E ≥ 0.5 and r < 0.5, the prey has enough power to effectively flee, and the Harris hawks perform a soft besiege with progressive rapid dives to confuse the prey:

$X_i^{t+1} = \begin{cases} Y = X_{prey} - E\left|J X_{prey} - X_i^t\right| & \text{if } f(Y) < f(X_i^t) \\ Z = Y + S \times Levy(d) & \text{if } f(Z) < f(X_i^t) \end{cases}$  (14)

where S is a 1 × d random vector and d is the problem dimension. When E < 0.5 and r < 0.5, the prey has insufficient escape energy, and the hawks attack with a hard besiege with progressive rapid dives following the Lévy flight function:

$X_i^{t+1} = \begin{cases} Y = X_{prey} - E\left|J X_{prey} - X_m^t\right| & \text{if } f(Y) < f(X_i^t) \\ Z = Y + S \times Levy(d) & \text{if } f(Z) < f(X_i^t) \end{cases}$  (15)

Figure 6: HHO-based flowchart for feature extraction.

After using HHO (Figure 6) for extraction, a CNN is added at the end. In the convolutional layer, the large original image of size l × h is denoted x. We begin by training sparse coding to extract small patches from the large picture. The feature f = σ(W x_s + b) is computed from the activation function and the weights and biases between the visible and hidden layer units. For each small patch x_s', we obtain the matching value f' = σ(W x_s' + b'), and the convolution of these features yields the feature convolution matrix. These features, once obtained by convolution, must then be categorized.
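To make the position updates concrete, the following is a condensed NumPy sketch of one HHO iteration covering Equations (9)-(13); the Lévy-flight dive cases of Equations (14)-(15) are folded into a simplified fallback, and the bounds and fitness function are placeholders to be supplied by the caller.

```python
import numpy as np

rng = np.random.default_rng(0)

def hho_step(X, X_prey, t, T, lb, ub):
    """One HHO iteration over hawk positions X (N x d), per Eqs. (9)-(13)."""
    N, d = X.shape
    X_m = X.mean(axis=0)                           # Eq. (10): mean hawk position
    new = np.empty_like(X)
    for i in range(N):
        E = 2 * rng.uniform(-1, 1) * (1 - t / T)   # Eq. (11): escaping energy
        if abs(E) >= 1:                            # exploration, Eq. (9)
            q, r1, r2, r3, r4 = rng.random(5)
            Xr = X[rng.integers(N)]
            if q >= 0.5:
                new[i] = Xr - r1 * abs(Xr - 2 * r2 * X[i])
            else:
                new[i] = (X_prey - X_m) - r3 * (lb + r4 * (ub - lb))
        else:                                      # exploitation
            r, J = rng.random(), 2 * (1 - rng.random())
            if abs(E) >= 0.5 and r >= 0.5:         # soft besiege, Eq. (12)
                new[i] = (X_prey - X[i]) - E * abs(J * X_prey - X[i])
            elif r >= 0.5:                         # hard besiege, Eq. (13)
                new[i] = X_prey - E * abs(X_prey - X[i])
            else:                                  # dive cases of Eqs. (14)-(15), simplified
                new[i] = X_prey - E * abs(J * X_prey - X[i])
    return np.clip(new, lb, ub)
```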
3.5 Feature selection

The BoW model proceeds in four steps. First, each image of the given image collection is sampled for patches represented by local descriptors. Second, a clustering algorithm generates a visual vocabulary, with each cluster center corresponding to a visual word. Third, a new image's local features are quantized using the previously built visual vocabulary. Lastly, a BoW histogram is produced for image representation [36, 37, 38] by collecting the frequency of each visual word in the image. In this way, the set of descriptors from each image is mapped to a new feature space with k dimensions, where k is the number of k-means centroids. Hard-assignment coding was employed to encode the features in this study. Given the visual words W_i of a vocabulary, the BoW image representation is:

$X(W_i) = \dfrac{1}{n}\sum_{c=1}^{n}\begin{cases} 1 & \text{if } i = \arg\min_j \left\lVert W_j - P_c \right\rVert \\ 0 & \text{otherwise} \end{cases}$  (16)

where n is the number of patches in the image and P_c is patch c. A pictorial representation is then constructed using the BoW paradigm and viewed as a "bag" of visual words.

At the start, intensity profiles are employed to capture the intensity difference between the tumor and the surrounding area. An intensity profile is a vector of image intensity values obtained by sampling the brightness of pixels along the normal to the cancer border, from the center of the tumor to the border of the cancerous area. The intensity profile is created as follows. Gaussian kernels smooth the points on the tumor border to prevent noise from shifting the boundary normal. In one dimension, the Gaussian kernel is:

$G_{1D}(X; \sigma) = \dfrac{1}{\sqrt{2\pi}\,\sigma}\, e^{-X^2/2\sigma^2}$  (17)

where σ is the standard deviation. The first derivative of G_{1D}(X; σ) is convolved with the points on the cancer boundary. With b(x, y) denoting the coordinates of the tumor boundary in the image, convolution yields the point coordinates:

$B(X_1, Y_1) = b(x, y) * G_{1D}(X; \sigma)'$  (18)

The angles of the border normal are calculated using:

$\theta = \arctan\left(\dfrac{y'}{x'}\right)$  (19)

For each boundary location associated with an intensity profile, the angle θ gives the sampling coordinates:

$X_i = x_i + l \cos\theta_i, \qquad Y_i = y_i + l \sin\theta_i$  (20)

where l is the distance along the normal between the border point and the sampled site. The resulting coordinates (X_i, Y_i) may not fall on exact pixel positions, so the intensities are obtained by linear interpolation.

Patch sampling and local descriptors are two crucial steps in building a BoW model. To simplify the subsequent computation, each raw patch is flattened into a one-dimensional feature vector. SIFT descriptors, which are scale- and rotation-invariant, are a better alternative to raw patches. Two visual vocabularies are created from patches sampled in the cancer and cancer-margin regions, respectively, so that the resulting vocabularies become more locally distinctive. In other words, a visual representation based on a region-specific vocabulary is more meaningful than one based on a universal vocabulary built from all of the image's data. Patches collected in the margin zone, together with its four subregions, are mapped to the margin region's vocabulary to generate the image representation of the margin zone; the BoW representation of the margin region is constructed by concatenating the BoW histograms of each area. If the margin vocabulary contains k_1 words, the BoW description of the margin area is a vector with 5·k_1 dimensions. As a result, the image has two BoW histograms: one for the cancer zone and one for the cancer margin. Finally, the proposed region-specific BoW characterization of the malignancy in a pancreatic cancer image is created by joining these two BoW histograms together.
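The following is a compact sketch of the vocabulary construction and the hard-assignment histogram of Equation (16), using k-means centroids as visual words; descriptor extraction is left to the caller, and the vocabulary size is an illustrative choice (the text above uses separate vocabularies for the cancer and margin regions).

```python
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptors, k=100):
    """Cluster local descriptors (n x d); each centroid is one visual word.
    k is illustrative only."""
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit(descriptors)

def bow_histogram(patch_descriptors, vocab):
    """Eq. (16): hard-assign each patch to its nearest word, then normalize by n."""
    words = vocab.predict(patch_descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / len(patch_descriptors)
```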
3.6 Classification

CNNs are trained in a feed-forward manner, from the first input layer to the final classification stage, with error back-propagation from the classification layer back to the first convolutional layer. The forward pass is as follows: neuron i of layer l receives input from neuron j of layer l - 1:

$ln_i^l = \sum_{j=1}^{n} W_{ij}^l x_j + b_i$  (21)

The non-linear ReLU function is used to calculate the output:

$out_i^l = \max\left(0,\, ln_i^l\right)$  (22)

Every neuron in the convolutional and fully connected layers uses Equations (21) and (22) to analyze the input and produce the output as a nonlinear activation. The pooling layer moves a K × K window across the N × N feature map and computes the maximum or average value within each window; as a result, the feature map's spatial size shrinks from N × N to N/K × N/K. Finally, each cancer type's classification probability is calculated using the softmax function:

$out_i = \dfrac{e^{ln_i}}{\sum_k e^{ln_k}}$  (23)

The back-propagation algorithm trains the CNN by minimizing the following cost function with respect to the unknown weights W:

$C = -\dfrac{1}{m}\sum_{i=1}^{m} \ln\left(p\left(y_i \mid X_i\right)\right)$  (24)

where X_i is the i-th sample in the training set with label y_i, and p(y_i | X_i) is the predicted probability of the true class. The cost is estimated over mini-batches, and stochastic gradient descent is used to lower the cost function C over N mini-batches. The weights are then updated at each iteration as follows, with W_l^t denoting the weights of convolutional layer l at iteration t and Ĉ the mini-batch cost:

$\gamma_t = \gamma^{\lfloor tN/m \rfloor}, \qquad V_l^{t+1} = \mu V_l^t - \gamma_t\, \alpha_l\, \dfrac{\partial \hat{C}}{\partial W_l}, \qquad W_l^{t+1} = W_l^t + V_l^{t+1}$  (25)

where α_l is the learning rate of layer l, γ is the scheduling rate that decreases the initial learning rate after a certain number of epochs, and μ is the momentum that determines the contribution of earlier weight updates to the current one. Every training iteration updates the weights of the CNN layers using Equation (25).

The VGG16 framework has 16 layers and 138 million learnable weights. The enormous number of local minima of Equation (24) can cause overfitting when training such deep networks; for limited datasets in particular, finding the right local minima of the cost function is challenging and results in overfitting. Therefore, pre-trained VGG16 weights were used for initialization [39, 40], and VGG16 was fine-tuned on the PDAC dataset after the weights were transferred. This design is shown in Figure 7, which illustrates VGG16's thirteen convolutional layers and three fully connected layers. With a layer-by-layer fine-tuning technique, adding one layer at a time results in nineteen configurations, and fine-tuning with five-fold cross-validation would require 95 VGG16 runs. If the training duration for each configuration is roughly thirty minutes, fine-tuning VGG16 layer-by-layer would take more than a week, and determining the appropriate parameters for layer-wise fine-tuning would take a similar length of time. The layer-by-layer fine-tuning method improved the findings slightly.

Figure 7: VGG16 network trainable parameters.

Based on the pooling layers, the VGG16 architecture can be divided into six blocks; the block-wise layout of VGG16 is depicted in Figure 8. The final fully connected layer of VGG16 normally consists of 1000 neurons corresponding to the ImageNet classes; in this model it is made up of three neurons, matching the classes of the PDAC dataset.

Figure 8: VGG16 architecture and its respective blocks.
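The following is a minimal PyTorch sketch of this transfer-learning setup: ImageNet-pretrained VGG16 with the final layer replaced by three output neurons, trained with momentum SGD in the spirit of Equation (25). The learning rate (0.09) comes from Section 4; the momentum value is an illustrative assumption.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained VGG16 and replace the 1000-way head
# with 3 outputs for the PDAC classes.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 3)

criterion = nn.CrossEntropyLoss()        # softmax + NLL, Eqs. (23)-(24)
optimizer = torch.optim.SGD(             # momentum update, Eq. (25); momentum assumed
    model.parameters(), lr=0.09, momentum=0.9)

def train_step(images, labels):
    """One mini-batch update."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```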
4 Performance analysis

The proposed model was trained on 70% of the dataset, with 30% reserved for testing, for 10 epochs with a learning rate of 0.09. The model was implemented with hardware specifications of a Ryzen 5/7 series CPU, NVIDIA GPU, 1 TB HDD, and Windows 10 OS, and software specifications of PyTorch, an open-source Python library for developing deep learning models, and Google Colaboratory, an open-source Google environment for building the model.

Experimental evaluation is carried out against models such as AlexNet, GoogleNet, Inception v3, VGG19, and ResNet50 over measures including accuracy, sensitivity, specificity, recall, precision, F1-score, detection rate, TPR, FPR, and computation time. Table 3 depicts the overall analysis of the various models over 5 image instances under accuracy, sensitivity, and specificity; Figure 9 depicts the corresponding graphical representation.

Table 3: Overall analysis under accuracy, sensitivity, specificity.

| Image | Model | Accuracy | Sensitivity | Specificity |
|---|---|---|---|---|
| Image 1 | Alexnet | 81 | 85 | 87 |
| | Googlenet | 84 | 89 | 91 |
| | Inception v3 | 88 | 91 | 93 |
| | VGG19 | 87 | 92 | 95 |
| | Resnet50 | 76 | 81 | 84 |
| | VGG16 | 96 | 97 | 98 |
| Image 2 | Alexnet | 81.3 | 85.4 | 87.1 |
| | Googlenet | 84.6 | 89.1 | 91.4 |
| | Inception v3 | 88.2 | 91.4 | 93.3 |
| | VGG19 | 87.4 | 92.5 | 95.2 |
| | Resnet50 | 76.2 | 81.4 | 84.4 |
| | VGG16 | 96.3 | 97.2 | 98.2 |
| Image 3 | Alexnet | 81.5 | 85.7 | 87.3 |
| | Googlenet | 84.7 | 89.4 | 91.5 |
| | Inception v3 | 88.4 | 91.7 | 93.6 |
| | VGG19 | 87.6 | 92.7 | 95.4 |
| | Resnet50 | 76.4 | 81.8 | 84.7 |
| | VGG16 | 96.5 | 97.5 | 98.5 |
| Image 4 | Alexnet | 81.8 | 85.8 | 87.6 |
| | Googlenet | 84.8 | 89.6 | 91.7 |
| | Inception v3 | 88.6 | 91.8 | 93.8 |
| | VGG19 | 87.7 | 92.9 | 95.6 |
| | Resnet50 | 76.7 | 81.9 | 84.8 |
| | VGG16 | 96.7 | 97.7 | 98.7 |
| Image 5 | Alexnet | 82 | 86 | 87.8 |
| | Googlenet | 85 | 90 | 92 |
| | Inception v3 | 89 | 92 | 94 |
| | VGG19 | 87.9 | 93 | 95.7 |
| | Resnet50 | 76.8 | 82 | 84.9 |
| | VGG16 | 96.9 | 97.8 | 98.8 |

Figure 9: Models vs measures: overall analysis under accuracy, sensitivity, and specificity.

Table 4 depicts the overall analysis of the various models under precision, recall, and F1-score; Figure 10 illustrates the corresponding graphical representation.

Table 4: Overall analysis under precision, recall, F1-score.

| Image | Model | Precision | Recall | F1-score |
|---|---|---|---|---|
| Image 1 | Alexnet | 83 | 74 | 83 |
| | Googlenet | 82 | 78 | 86 |
| | Inception v3 | 87 | 82 | 81 |
| | VGG19 | 85 | 84 | 87 |
| | Resnet50 | 79 | 68 | 71 |
| | VGG16 | 93 | 86 | 89 |
| Image 2 | Alexnet | 83.4 | 74.2 | 83.4 |
| | Googlenet | 82.5 | 78.5 | 86.1 |
| | Inception v3 | 87.2 | 82.1 | 81.4 |
| | VGG19 | 85.3 | 84.3 | 87.5 |
| | Resnet50 | 79.1 | 68.2 | 71.4 |
| | VGG16 | 93.3 | 86.3 | 89.2 |
| Image 3 | Alexnet | 83.6 | 74.4 | 83.6 |
| | Googlenet | 82.7 | 78.7 | 86.4 |
| | Inception v3 | 87.5 | 82.6 | 81.6 |
| | VGG19 | 85.5 | 84.4 | 87.7 |
| | Resnet50 | 79.3 | 68.5 | 71.5 |
| | VGG16 | 93.6 | 86.7 | 89.5 |
| Image 4 | Alexnet | 83.7 | 74.7 | 83.7 |
| | Googlenet | 82.8 | 78.8 | 86.6 |
| | Inception v3 | 87.8 | 82.8 | 81.7 |
| | VGG19 | 85.7 | 84.7 | 87.8 |
| | Resnet50 | 79.7 | 68.7 | 71.7 |
| | VGG16 | 93.7 | 86.8 | 89.8 |
| Image 5 | Alexnet | 84 | 75 | 84 |
| | Googlenet | 83 | 79 | 87 |
| | Inception v3 | 87.9 | 83 | 82 |
| | VGG19 | 86 | 85 | 88 |
| | Resnet50 | 80 | 69 | 72 |
| | VGG16 | 94 | 87 | 90 |

Figure 10: Models vs measures: overall analysis under precision, recall, and F1-score.

Table 5 depicts the overall analysis of the various models under detection rate, TPR, and FPR. Figure 11 depicts the corresponding graphical representation, in which the proposed model outperforms the others by a considerable margin. Figure 12 depicts the computation time of the various models during the training period. Figure 13 depicts example outputs of the segmentation stage.

Table 5: Overall analysis under detection rate, TPR, FPR.

| Model | Detection rate | TPR | FPR |
|---|---|---|---|
| Alexnet | 85 | 82 | 18 |
| Googlenet | 83 | 81 | 19 |
| Inception v3 | 90 | 87 | 13 |
| VGG19 | 86 | 83 | 17 |
| Resnet50 | 78 | 73 | 27 |
| VGG16 | 95 | 92 | 8 |

Figure 11: Models vs measures: overall analysis under detection rate, TPR, and FPR.

Figure 12: Models vs computation time during the training period.

Figure 13: Segmentation output, where (a, b, c) depict unhealthy output and (d, e, f) depict healthy output using UNet++.
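The reported measures follow the usual confusion-matrix definitions; a small sketch, assuming binary labels, of how they can be computed:

```python
import numpy as np

def metrics(y_true, y_pred):
    """Confusion-matrix-based measures used in Tables 3-5 (binary case)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    sens = tp / (tp + fn)                 # sensitivity = recall = TPR
    prec = tp / (tp + fp)                 # precision
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": sens,
        "specificity": tn / (tn + fp),
        "precision": prec,
        "f1": 2 * prec * sens / (prec + sens),
        "fpr": fp / (fp + tn),
    }
```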
5 Discussion

The purpose of this study is to demonstrate the effectiveness of MRI image analysis using the VGG-16 model with the Harris hawks optimization (HHO) algorithm in segmentation and feature selection for pancreatic cancer classification from MRIs. Because MRI provides better contrast between fat, water, muscle, and other soft tissues than CT, it generally offers good spatial resolution compared to other modalities. Reviews of previous studies [41] show that conventional MRI has a high degree of sensitivity and specificity for the detection of pancreatic tumors when the presence of the tumor is suspected. The sensitivity of our proposed framework for the detection of pancreatic cancer was 96.34 on the test data set, and its precision, recall, and F1-score are high compared with the other approaches discussed in the literature in Table 1. As a result, our framework is comparable to the human ability to recognize images.

In an analysis of 225 asymptomatic patients at high risk of pancreatic cancer, Canto et al. (2012) [42] found that EUS (endoscopic ultrasound) had the highest tumor detection rate (42%), compared with CT (11%) and MRI (33%). For tumor detection, Tonozuka et al. (2021) [17] used EUS imaging, which yields higher sensitivity and can detect smaller tumors (Grade I). However, given the risks and convenience issues associated with EUS, MRI appears to be the better method; hence our model uses MRI images for the detection of pancreatic cancer. The classification accuracy of the proposed method is 93.52, compared to an accuracy of 87.52 for the VGG-19 image classification model [18]. According to Fu et al. (2021) [19], the Inception model relies on nuclear features, which leads to false positive results that can be avoided by optimizing feature selection. In our proposed model, we utilized a VGG-16 framework with HHO-based CNNs and HHO-based bags of visual words for feature extraction and selection to improve accuracy even with fewer convolutional layers than the VGG-19 model [18].

In general, unlike computers, the human brain does not perform at its best when fatigued, stressed, or limited in experience, which can result in misdiagnosis or overlooking a lesion during an MRI. Artificial intelligence, on the other hand, can consistently provide reliable performance within a very short period, thereby compensating for the limitations of human capability and preventing human errors in clinical practice. Our framework can therefore be useful both for beginners learning MRI and for fatigued experts, guarding against the carelessness that can accompany a large screening workload. Additionally, the data set for this study included a variety of images, including those with hazy borders and unclear content, as frequently seen in clinical exams. These images were enhanced using the Boosted Anisotropic Diffusion Filter (BADF) and contrast-limited adaptive histogram equalization (CLAHE) algorithms to improve image quality and accuracy. We therefore believe that our system can detect diverse tumors by learning from the images, aided by better image-enhancement techniques and optimal feature selection strategies.

6 Conclusion

This paper presents an effective and novel approach for early-stage pancreatic cancer detection using deep learning.
For this, MRI data are first collected from the popular CPTAC-PDA repository; preprocessing is done with the help of CLAHE and BADF, and the cancer regions are then segmented using UNet++. Further, quintessential features are extracted and selected using both HHO-based CNN and HHO-based BoVW. Finally, VGG16 with transfer learning is used for detection. The proposed model outperforms state-of-the-art models with an accuracy of 0.96 under various measures. This paper should help other researchers to dig deeper, understand the stages, and come up with better integrated and advanced models.

References

[1] National Cancer Institute (28 February 2021). Surveillance, Epidemiology, and End Results (SEER) Cancer Stat Facts: Pancreatic Cancer. Available online: https://seer.cancer.gov/statfacts/html/pancreas.html
[2] Siegel, R.L.; Miller, K.D.; Jemal, A. (2020). Cancer statistics, 2020. CA Cancer J. Clin., 70, 7–30. https://doi.org/10.3322/caac.21590
[3] Andersson, R.; Vagianos, C.E.; Williamson, R.C.N. (2004). Preoperative staging and evaluation of resectability in pancreatic ductal adenocarcinoma. HPB (Oxford), 6, 5–12. https://doi.org/10.1080/13651820310017093
[4] Mizrahi, J.D.; Surana, R.; Valle, J.W.; Shroff, R.T. (2020). Pancreatic cancer. The Lancet, 395, 2008–2020. https://doi.org/10.1016/s0140-6736(20)30974-0
[5] Gaida, M.M. (2020). The ambiguous role of the inflammatory micromilieu in solid tumors. Pathologe, 41, 118–123. https://doi.org/10.1007/s00292-020-00837-1
[6] Mayer, P.; Dinkic, C.; Jesenofsky, R.; Klauss, M.; Schirmacher, P.; Dapunt, U.; Hackert, T.; Uhle, F.; Hansch, G.M.; Gaida, M.M. (2018). Changes in the microarchitecture of the pancreatic cancer stroma are linked to neutrophil-dependent reprogramming of stellate cells and reflected by diffusion-weighted magnetic resonance imaging. Theranostics, 8, 13–30. https://doi.org/10.7150/thno.21089
[7] Grosse-Steffen, T.; Giese, T.; Giese, N.; Longerich, T.; Schirmacher, P.; Hansch, G.M.; Gaida, M.M. (2012). Epithelial-to-mesenchymal transition in pancreatic ductal adenocarcinoma and pancreatic tumor cell lines: The role of neutrophils and neutrophil-derived elastase. Clinical and Developmental Immunology, 1–12. https://doi.org/10.1155/2012/720768
[8] Gaida, M.M.; Steffen, T.G.; Gunther, F.; Tschaharganeh, D.F.; Felix, K.; Bergmann, F.; Schirmacher, P.; Hansch, G.M. (2012). Polymorphonuclear neutrophils promote dyshesion of tumor cells and elastase-mediated degradation of E-cadherin in pancreatic tumors. Eur. J. Immunol., 42, 3369–3380. https://doi.org/10.1002/eji.201242628
[9] Verbeke, C. (2016). Morphological heterogeneity in ductal adenocarcinoma of the pancreas—Does it matter? Pancreatology, 16, 295–301. https://doi.org/10.1016/j.pan.2016.02.004
[10] Hruban, R.H.; Adsay, N.V.; Albores-Saavedra, J.; Compton, C.; Garrett, E.S.; Goodman, S.N.; Kern, S.E.; Klimstra, D.S.; Kloppel, G.; Longnecker, D.S. (2001). Pancreatic intraepithelial neoplasia: A new nomenclature and classification system for pancreatic duct lesions. Am. J. Surg. Pathol., 25, 579–586. https://doi.org/10.1097/00000478-200105000-00003
[11] Ren, B.; Liu, X.; Suriawinata, A.A. (2019). Pancreatic ductal adenocarcinoma and its precursor lesions: Histopathology, cytopathology, and molecular pathology. Am. J. Pathol., 189, 9–21. https://doi.org/10.1016/j.ajpath.2018.10.004
[12] Esposito, I.; Hruban, R.H.; Verbeke, C.; Terris, B.; Zamboni, G.; Scarpa, A.; Morohoshi, T.; Suda, K.; Luchini, C.; Klimstra, D.S.; et al. (2020). Guidelines on the histopathology of chronic pancreatitis. Recommendations from the working group for the international consensus guidelines for chronic pancreatitis in collaboration with the International Association of Pancreatology, the American Pancreatic Association, the Japan Pancreas Society, and the European Pancreatic Club. Pancreatology, 20, 586–593. https://doi.org/10.1016/j.pan.2020.04.009
[13] Hanna, M.G.; Reuter, V.E.; Hameed, M.R.; Tan, L.K.; Chiang, S.; Sigel, C.; Hollmann, T.; Giri, D.; Samboy, J.; Model, C.; et al. (2019). Whole slide imaging equivalency and efficiency study: Experience at a large academic center. Modern Pathology, 32, 916–928. https://doi.org/10.1038/s41379-019-0205-0
[14] Markl, B.; Fuzesi, L.; Huss, R.; Bauer, S.; Schaller, T. (2020). Number of pathologists in Germany: Comparison with European countries, USA, and Canada. Virchows Arch., 478(2), 335–341. https://doi.org/10.1007/s00428-020-02894-6
[15] Metter, D.M.; Colgan, T.J.; Leung, S.T.; Timmons, C.F.; Park, J.Y. (2019). Trends in the US and Canadian pathologist workforces from 2007 to 2017. JAMA Network Open, 2, e194337. https://doi.org/10.1001/jamanetworkopen.2019.4337
[16] Jiang, Y.; Yang, M.; Wang, S.; Li, X.; Sun, Y. (2020). Emerging role of deep learning-based artificial intelligence in tumour pathology. Cancer Commun. (Lond.), 40, 154–166. https://doi.org/10.1002/cac2.12012
[17] Tonozuka, R.; Itoi, T.; Nagata, N.; Kojima, H.; Sofuni, A.; Tsuchiya, T.; ...; Mukai, S. (2021). Deep learning analysis for the detection of pancreatic cancer on endosonographic images: A pilot study. Journal of Hepato-Biliary-Pancreatic Sciences, 28(1), 95–104. https://doi.org/10.1002/jhbp.825
[18] Liu, K.L.; Wu, T.; Chen, P.T.; Tsai, Y.M.; Roth, H.; Wu, M.S.; ...; Wang, W. (2020). Deep learning to distinguish pancreatic cancer tissue from non-cancerous pancreatic tissue: A retrospective study with cross-racial external validation. The Lancet Digital Health, 2(6), e303–e313. https://doi.org/10.1016/s2589-7500(20)30078-9
[19] Fu, H.; et al. (2021). Automatic pancreatic ductal adenocarcinoma detection in whole slide images using deep convolutional neural networks. Frontiers in Oncology, 11, 2464. https://doi.org/10.3389/fonc.2021.665929
[20] Abbas, S.K.; Obied, R.S. (2021). Novel computer aided diagnostic system using synergic deep learning technique for early detection of pancreatic cancer. Webology, 18, Special Issue on Information Retrieval and Web Search, 367–379. https://doi.org/10.14704/web/v18si02/web18105
[21] Li, J.; Qi, L.; Chen, Q.; Zhang, Y.D.; Qian, X. (2022). A dual meta-learning framework based on idle data for enhancing segmentation of pancreatic cancer. Medical Image Analysis, 78, 102342. https://doi.org/10.1016/j.media.2021.102342
[22] Online source: https://wiki.cancerimagingarchive.net/plugins/servlet/mobile?contentId=21267608
[23] Suman, G.; Patra, A.; Korfiatis, P.; Majumder, S.; Chari, S.T.; Truty, M.J.; ...; Goenka, A.H. (2021). Quality gaps in public pancreas imaging datasets: Implications & challenges for AI applications. Pancreatology, 21(5), 1001–1008. https://doi.org/10.1016/j.pan.2021.03.016
[24] Rao, K.; Bansal, M.; Kaur, G. (2022).
Retinex-centred contrast enhancement method for histopathology images with weighted CLAHE. Arabian Journal for Science and Engineering, 47(11), 13781–13798. https://doi.org/10.1007/s13369-021-06421-w
[25] Rodríguez-Pena, A.; Uranga-Solchaga, J.; Ortiz-de-Solórzano, C.; Cortés-Domínguez, I. (2020). Spheroscope: A custom-made miniaturized microscope for tracking tumour spheroids in microfluidic devices. Scientific Reports, 10(1), 1–12. https://doi.org/10.1038/s41598-020-59673-1
[26] Sawssen, B.; Okba, T.; Noureeddine, L. (2022). A mammographic image classification technique via the Gaussian radial basis kernel ELM and KPCA. International Conference on Mathematics and Computers in Science and Engineering, Spain.
[27] Uplaonkar, D.S.; Patil, N. (2021). Ultrasound liver tumor segmentation using adaptively regularized kernel-based fuzzy C means with the enhanced level set algorithm. International Journal of Intelligent Computing and Cybernetics, 15(3), 438–453. https://doi.org/10.1108/ijicc-10-2021-0223
[28] Iima, M.; Le Bihan, D. (2016). Clinical intravoxel incoherent motion and diffusion MR imaging: Past, present, and future. Radiology, 278(1), 13–32. https://doi.org/10.1148/radiol.2015150244
[29] Goyal, B.; Dogra, A.; Agrawal, S.; Sohi, B.S.; Sharma, A. (2020). Image denoising review: From classical to state-of-the-art approaches. Information Fusion, 55, 220–244. https://doi.org/10.1016/j.inffus.2019.09.003
[30] Long, J.; Shelhamer, E.; Darrell, T. (2015). Fully convolutional networks for semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3431–3440. https://doi.org/10.1109/cvpr.2015.7298965
[31] Ronneberger, O.; Fischer, P.; Brox, T. (2015). U-Net: Convolutional networks for biomedical image segmentation. LNCS, vol. 9351, 234–241. Springer, Cham. https://doi.org/10.1007/978-3-319-24574-4_28
[32] Zhang, L.; Shi, Y.; Yao, J.; Bian, Y.; Cao, K.; Jin, D.; ...; Lu, L. (2020). Robust pancreatic ductal adenocarcinoma segmentation with multi-institutional multi-phase partially-annotated CT scans. Medical Image Computing and Computer Assisted Intervention – MICCAI 2020, 491–500. https://doi.org/10.1007/978-3-030-59719-1_48
[33] Zhou, Z.; Rahman Siddiquee, M.M.; Tajbakhsh, N.; Liang, J. (2018). UNet++: A nested U-Net architecture for medical image segmentation. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, 3–11. https://doi.org/10.1007/978-3-030-00889-5_1
[34] Basha, J.; Bacanin, N.; Vukobrat, N.; Zivkovic, M.; Venkatachalam, K.; Hubálovský, S.; Trojovský, P. (2021). Chaotic Harris hawks optimization with quasi-reflection-based learning: An application to enhance CNN design. Sensors, 21(19), 6654. https://doi.org/10.3390/s21196654
[35] Thaher, T.; Heidari, A.A.; Mafarja, M.; Dong, J.S.; Mirjalili, S. (2020). Binary Harris hawks optimizer for high-dimensional, low sample size feature selection. Algorithms for Intelligent Systems, 251–272. https://doi.org/10.1007/978-981-32-9990-0_12
[36] Bar, Y.; Diamant, I.; Wolf, L.; Lieberman, S.; Konen, E.; Greenspan, H. (2018). Chest pathology identification using deep feature selection with non-medical training. Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, 6(3), 259–263. https://doi.org/10.1080/21681163.2016.1138324
[37] Chaib, S.; Gu, Y.; Yao, H. (2015).
An informative feature selection method based on sparse PCA for VHR scene classification. IEEE Geoscience and Remote Sensing Letters, 13(2), 147–151. https://doi.org/10.1109/lgrs.2015.2501383
[38] Huang, M.; Yang, W.; Yu, M.; Lu, Z.; Feng, Q.; Chen, W. (2012). Retrieval of brain tumors with region-specific bag-of-visual-words representations in contrast-enhanced MRI images. Computational and Mathematical Methods in Medicine, 1–17. https://doi.org/10.1155/2012/280538
[39] Baldota, S.; Sharma, S.; Malathy, C. (2021). Deep transfer learning for pancreatic cancer detection. 2021 12th International Conference on Computing Communication and Networking Technologies (ICCCNT), 1–7. https://doi.org/10.1109/icccnt51525.2021.9580000
[40] Sehmi, M.N.M.; Fauzi, M.F.A.; Ahmad, W.S.H.M.W.; Chan, E.W.L. (2021). Pancreatic cancer grading in pathological images using deep learning convolutional neural networks. F1000Research, 10(1057), 1057. https://doi.org/10.12688/f1000research.73161.1
[41] Costache, M.I.; et al. (2017). Which is the best imaging method in pancreatic adenocarcinoma diagnosis and staging—CT, MRI or EUS? Current Health Sciences Journal, 43(2), 132–136. http://dx.doi.org/10.12865/CHSJ.43.02.05
[42] Canto, M.I.; et al. (2012). Frequent detection of pancreatic lesions in asymptomatic high-risk individuals. Gastroenterology, 142(4), 796–804. https://doi.org/10.1053/j.gastro.2012.02.029