https://doi.org/10.31449/inf.v47i1.4429    Informatica 47 (2023) 43–50

Implementation of Multiple CNN Architectures to Classify the Sea Coral Images

Zainab N. Nemer 1, Wala'a N. Jasim 2 and Esra'a J. Harfash 1
1 College of Computer Science and Information Technology, University of Basrah, Iraq
2 Department of Pharmacognosy, College of Pharmacy, University of Basrah, Iraq
E-mail: zainab.nemer@uobasrah.edu.iq, walaa.jasim@uobasrah.edu.iq, esra.harfash@uobasrah.edu.iq

Keywords: corals, deep learning, image classification, image processing, sea coral identification, CNN, AlexNet, SqueezeNet, GoogLeNet, Inception-v3, coral classification

Received: September 29, 2022

Image processing and computer vision play a major role in addressing many problems: images, and the techniques applied to them, contribute greatly to finding solutions in many fields and directions. Classification techniques have a large and important role in this field; through them it is possible to recognize and classify images in a way that helps solve a specific problem. Among the most prominent models, distinguished by their ability and accuracy in recognition, is the CNN. In this research we introduce a system to classify sea coral images, since sea corals and their classes have many benefits in many aspects of our lives. The core of this work is to study four CNN architectures (AlexNet, SqueezeNet, GoogLeNet/Inception-v1 and Google Inception-v3), to determine the accuracy and efficiency of these architectures, and to determine which of them works best on coral image data; the details are presented in the following sections. The results showed an accuracy of 83.33% for AlexNet, 80.85% for SqueezeNet, 90.5% for GoogLeNet and 93.17% for Inception-v3.

Povzetek: Predstavljena je uporaba arhitektur konvolucijskih nevronskih mrež (CNN) za razvrščanje slik morskih koral.

1 Introduction

There is a growing scientific consensus that Earth systems are under unprecedented stress. The human and economic development model that emerged during the recent industrial revolutions has had a significant impact on our planet. For 10,000 years, the Earth's relative stability has allowed civilizations to flourish; over time, industrialization has jeopardized this stability. The United Nations Sustainable Development Goals are another lens through which to see the challenges facing humanity. Six of the 17 goals are directly related to the environment and human influence: combating climate change, using oceans and marine resources wisely, managing forests, combating desertification, reversing land degradation, and sustainable development [1].

Effective management depends on ecosystem monitoring, and prompt reporting is necessary to offer timely advice. At the same time, the procedure of gathering underwater data to follow benthic communities is greatly aided by digital images. Recent years have seen tremendous advancement in image recognition technology within artificial intelligence and its various uses in modern society, opening up new technologies and avenues to enhance coral reef monitoring capabilities. Coral reef monitoring is expensive because it requires specialized techniques. Furthermore, due to the remoteness of reefs and diving requirements, long-term data sets are often scattered or spatially constrained.
To keep costs down, this monitoring approach has increased the usage of digital underwater photography over small spatial scales [2]. In recent years, with the rapid developments in the identification of digital content, automatic image classification has become one of the most challenging tasks in computer vision. In comparison with human vision, comprehending and automatically analyzing images is difficult [3], and since computer vision is a combination of pattern recognition and image processing, the output of the process is image understanding [4]. One of the models that has demonstrated excellent performance in computer vision problems, particularly image classification, is the convolutional neural network (CNN) [5]. Currently, the CNN has become one of the most attractive methods and is considered a decisive factor in many modern, diverse and challenging machine learning applications, for example the ImageNet object detection challenge, image classification, and face recognition. A typical CNN consists of one or more blocks of convolution and sub-sampling layers, followed by one or more fully connected layers (FCLs) and an output layer, as in Figure 1.

Figure 1: Convolutional Neural Network.

The CNN's central part is the convolutional layer (conv layer). Images are typically stationary in nature; that is, the statistics of one part of the image are the same as those of any other part. A feature learned in one region can therefore match a similar pattern in another [6]. The CNN model has several architectures, and below we describe those used in this work.

AlexNet is a deep CNN that successfully outperformed the classical image object recognition procedures. Rather than the sigmoid or tanh functions that were formerly the accepted standard for traditional CNNs, AlexNet uses the ReLU (Rectified Linear Unit) for the non-linear part. ReLU is given by f(x) = max(0, x). Three FCLs are placed after five sequentially connected convolutional layers with decreasing filter sizes. AlexNet can quickly downsample the intermediate representations with the use of strided convolutions and max-pooling layers. Vectorized convolutional maps are utilized as inputs to a sequence of two FCLs, as depicted in Figure 2 [7,8].

Figure 2: AlexNet architecture.

SqueezeNet is a CNN architecture that has 50 times fewer parameters than AlexNet while maintaining accuracy on par with AlexNet. The original work demonstrated the model's architecture and its application to the ImageNet dataset. The SqueezeNet model employs the following techniques to cut the bulk of its parameters: reducing the number of input channels to 3×3 filters, substituting 1×1 filters for 3×3 filters, and down-sampling late in the network. Figure 3 shows how the fire module's convolution filters are organized: a squeeze convolution layer, which has only 1×1 filters, feeds into an expand layer, which has a combination of 1×1 and 3×3 convolution filters [9,10].

Figure 3: Organization of convolution filters in the fire module.
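To make the fire module concrete, the following is a minimal PyTorch sketch of the organization just described: a 1×1 squeeze convolution feeding parallel 1×1 and 3×3 expand convolutions whose outputs are concatenated, with ReLU between the squeeze and expand layers. The channel sizes are illustrative assumptions, not values taken from the paper.

import torch
import torch.nn as nn

class Fire(nn.Module):
    """Squeeze (1x1) convolution followed by parallel 1x1 and 3x3 expand convolutions."""
    def __init__(self, in_ch, squeeze_ch, expand1x1_ch, expand3x3_ch):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand1x1_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand3x3_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)      # ReLU between the squeeze and expand layers

    def forward(self, x):
        x = self.relu(self.squeeze(x))         # squeeze: reduce the channel count
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)  # concatenate expand branches

# Illustrative sizes: 96 input channels squeezed to 16, expanded to 64 + 64 = 128.
fire = Fire(in_ch=96, squeeze_ch=16, expand1x1_ch=64, expand3x3_ch=64)
print(fire(torch.randn(1, 96, 55, 55)).shape)  # torch.Size([1, 128, 55, 55])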
GoogLeNet is based on the Inception architecture: it is a network that repeats an inception module. From the network's architecture in Figure 4, it can be seen that there are certain skip connections that, in essence, constitute a mini-module replicated across the network. This module was named the "inception module" by Google. Pooling operations, spatial convolution, and channel reprojection are all included in each module. Larger convolutional operations (n×n) are split into two convolutional operations with n×1 and 1×n filter sizes; the parameter space is thereby shrunk by two orders of magnitude [11,12,13].

Figure 4: An illustration of the layers of GoogLeNet.

The Inception-v3 CNN architecture uses factorized 7×7 convolutions, label smoothing, and an auxiliary classifier to propagate label information lower down the network, among other advances (along with batch normalization for the layers in the side head). An FCL is then developed on top of the Inception-v3 architecture as a platform for optimizing the classification process. During model building the convolution layers learn their own convolution kernels to produce the tensor outputs, and prior to the classification stage our custom model is concatenated with the individually acquired segmented features. Inception-v3 is therefore considered a good base model because of its capability to extract important features that can be utilized in image classification. Figure 5 shows the general architecture [14,15].

Figure 5: Complete architecture of Inception-v3.
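The factorization mentioned above and used heavily in Inception-v3 (replacing an n×n convolution with an n×1 convolution followed by a 1×n convolution) can be illustrated with a short PyTorch sketch. The choice of n = 7 and 64 channels is an illustrative assumption, not a detail of the authors' setup.

import torch
import torch.nn as nn

n = 7                                          # e.g. the factorized 7x7 convolutions of Inception-v3
full = nn.Conv2d(64, 64, kernel_size=n, padding=n // 2)
factorized = nn.Sequential(                    # n x 1 followed by 1 x n over the same receptive field
    nn.Conv2d(64, 64, kernel_size=(n, 1), padding=(n // 2, 0)),
    nn.Conv2d(64, 64, kernel_size=(1, n), padding=(0, n // 2)),
)

params = lambda m: sum(p.numel() for p in m.parameters())
print(params(full), params(factorized))        # roughly 200.8k vs 57.5k parameters for this layer
x = torch.randn(1, 64, 32, 32)
print(full(x).shape, factorized(x).shape)      # both keep the 32x32 spatial size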
The four transfer learning architectures have been trained in this study to test their capacity to identify images of sea coral, and the accuracy results are reported. The rest of this work is structured as follows: Section 1 presents the introduction, Section 2 the related works, Section 3 the description of the working system, Section 4 the experimental results in detail, and Section 5 the discussion and conclusions.

2 Related work

Convolutional neural network models can be applied to many topics for classification purposes. There are many types of CNN models that can be used for each specific topic, and the following is a set of research in this direction.

The study by Sumit Sharan et al. is based only on the challenging but significant Scleractinian (stony) corals. Further research is done on a suggested method using structural levels such as branching corals. The verification results show that the testing and training data are nearly identical, demonstrating the capability of the suggested method to learn and predict accurately [16].

S. M. Jaisakthi et al. introduced work to monitor coral reefs by automatically recognizing and labelling several types of benthic substrate with bounding boxes in a given image. An approach based on CNNs is presented in that research to recognize and detect various kinds of benthic substrates; since the technique is quicker and more accurate at recognizing objects, they adopted a Faster R-CNN structure for substrate detection [17].

A classification approach for coral reef images was demonstrated by Zvy Dubinsky et al., and it may be altered to fit other dataset characteristics (number of classes, size of the dataset, class types, etc.). The study also compared several CNN architectures, such as ResNet-50 and VGG-16, and applied transfer learning. Eleven classes of coral species, represented by 5500 images, made up the ResNet-50 dataset. Here DL is used to find out which coral species were most common in the Gulf of Eilat and then to link those findings to other ecological factors such as water depth or anthropogenic disturbance [18].

Szegedy et al. utilized seven GoogLeNet models in their study. The initialization (and even the initial weights, due to an oversight) and learning rate policies used for training these models were the same; the main differences between them were the sampling methods used and the randomness of the input images. The ILSVRC 2014 classification challenge involves placing an image into one of 1000 leaf-node categories in the ImageNet hierarchy, with around 1.2 million training images, 50,000 validation images, and 100,000 testing images [19].

The purpose of the work led by Eduardo Tusa and colleagues is to construct a supervised machine-learning-based vision system for coral detection. A bank of Gabor wavelet filters is used to extract texture feature descriptors, and learning classifiers from the OpenCV library are used to distinguish between coral and non-coral reef. They used a database of 621 images (created for this purpose) depicting Belize's coral reef, 110 for training the classifiers and 511 for testing the coral detector, and chose the decision-tree approach since it performed the most quickly and accurately [20].

CNNs, a supervised deep learning technique, are used by Mohamed Elsayed Elawady to offer an effective sparse classification for coral species. The researchers also experiment with cutting-edge underwater image enhancement, color conversion, and color normalization algorithms, while computing Phase Congruency (PC), Weber Local Descriptor (WLD), and Zero Component Analysis (ZCA) whitening to extract shape and texture feature descriptors that are used as supplementary channels (feature-based maps) alongside the input coral image's basic spatial color channels (spatial-based maps) [21].

The classification of radiography images using eleven CNN architectures (VGG-19, GoogLeNet, SqueezeNet, AlexNet, Inception-v3, ResNet-18, VGG-16, ResNet-50, DenseNet-201, ResNet-101, and Inception-ResNet-v2) is presented by Ananda et al. With the use of CNNs, two classes of wrist radiographs, normal and abnormal, from the Stanford Musculoskeletal Radiographs (MURA) dataset were identified. The architectures were compared using different hyper-parameters against accuracy and Cohen's kappa coefficient [22].

In order to establish a simpler, more effective, and quicker way to automate the classification of corals, the fundamental analysis was explored in the work of Sumit Sharan and colleagues using approaches such as CNNs and DL. Only the challenging but significant Scleractinian (stony) corals are used as the basis for that article. Further research is done on a suggested method using structural levels such as branching corals. The verification results show that the testing and training data are nearly identical, demonstrating the capability of the suggested method to learn and predict accurately [23].

In this article, Nurbaity Sabri and colleagues offer a study that contrasts the leaf recognition abilities of a basic CNN and the pre-trained models AlexNet and GoogLeNet. The use of such classification models has greatly advanced computer vision. The study uses the MalayaKew dataset to evaluate leaf recognition performance. GoogLeNet exceeds both the standard CNN and AlexNet, achieving a flawless accuracy rate of 100%.
Because of the several layers in its architecture, GoogLeNet's processing time is longer than that of the other models [24].

The accuracy of a technique developed by Hopkinson B.M. and colleagues to automatically classify 3D reconstructions of reef sections was evaluated. Locations on the 3D reconstruction were mapped back into the original images to extract various views of each location and produce a classified 3D map. CNNs were utilized in every method examined for classifying or extracting characteristics from images; however, the tested methods differed in how they combined information from different views of a point into a single classification. Probability averaging, voting, and a learned neural network layer were the methods used for combining information [25]-[27].

3 Description of work system

The fields of artificial intelligence and computer vision have witnessed tremendous developments in digital image processing across various disciplines in recent years, and this development has played a major role in addressing many of the issues in whose solution images are involved, including medical, industrial, educational and other issues. In every such direction, many factors control the quality of the results, including the amount of data, the processing method used, and the way the final results are extracted from the analyzed images. In this research we turned to processing pictures of sea coral and classifying them using CNNs. The following reviews the most important steps followed in this research to read, process and classify the sea coral images.

There are many types of coral around the world; some species thrive in warm, shallow waters close to beaches and coasts, and some live in the depths of the cold, dark sea. Corals therefore differ in their characteristics, and in general coral is classified as either hard or soft; there are many known types of both. They are easily recognized because they resemble plants, live in colonies, and have a distinctive appearance. For the experiments we dealt with ten classes of sea corals: Great Star Coral, Brain Coral, Table Coral, Pillar Coral, Staghorn Coral, Bubble Coral, Sea Pens, Toadstool Coral, Carnation Coral, and Gorgonian (Sea Fans). Each class has 50 images. Five of these classes are hard coral (Great Star Coral, Brain Coral, Table Coral, Pillar Coral, and Staghorn Coral), and the other five are soft coral (Bubble Coral, Sea Pens, Toadstool Coral, Carnation Coral, and Gorgonian). The dataset was compiled carefully, according to precise image specifications, from different sites on the Internet. Figure 6 shows samples from each class of the adopted coral database.
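As a hedged sketch of how a ten-class image set like this could be read and prepared, the torchvision snippet below assumes a directory layout of coral_dataset/<class name>/*.jpg and an 80/20 train/validation split; neither detail is specified in the paper, and the crop size shown (227) is the one required by AlexNet and SqueezeNet (224 for GoogLeNet, 299 for Inception-v3).

import torch
from torchvision import datasets, transforms

preprocess = transforms.Compose([
    transforms.Resize((250, 250)),        # images are first brought to 250x250x3
    transforms.CenterCrop(227),           # then cropped to the size the chosen net expects
    transforms.ToTensor(),
])

dataset = datasets.ImageFolder("coral_dataset", transform=preprocess)  # assumed layout
n_train = int(0.8 * len(dataset))         # 10 classes x 50 images = 500 images in total
train_set, val_set = torch.utils.data.random_split(
    dataset, [n_train, len(dataset) - n_train])

train_loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)
val_loader = torch.utils.data.DataLoader(val_set, batch_size=16)
print(dataset.classes)                    # e.g. ['Brain Coral', 'Bubble Coral', ...]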
3.1 The CNN structure of sea coral

In this work we tested four different CNN networks (AlexNet, SqueezeNet, GoogLeNet, and Inception-v3) in order to evaluate the efficiency of each net in terms of its ability to classify the sea coral data. The input image is of size 250×250×3 and is then cropped to the size each network model requires. The following is a description of each network as used in this classification problem.

AlexNet: The architecture of AlexNet consists of 25 layers:
• The input data size is [227, 227, 3].
• There are five convolutional layers.
• To extract the most appropriate features, there are three max-pooling layers.
• Then come two consecutive FCLs.
• Softmax is used as the activation of the last network layer for predictions.
• The ReLU activation function is used, where ReLU is the default activation function.
• The stochastic gradient descent with momentum (SGDM) solver is used.

SqueezeNet: This model is very common in image classification problems because it gives great classification accuracy. The SqueezeNet architecture consists of 68 layers:
• The input size here is 227×227×3.
• There is a single convolutional layer at the input and another at the output.
• Three 3×3 max-pooling layers with stride 2.
• The ReLU activation function is used, implemented between the squeeze and expand layers.
• Eight fire modules.
• Softmax and the SGDM optimizer are used here.

GoogLeNet: GoogLeNet is one of the important models because it trains relatively fast. The architecture of this net consists of 144 layers:
• Input images of size 224×224×3.
• Three 3×3 max-pooling layers with stride 2.
• Nine inception modules.
• The ReLU activation function is implemented.
• The SGDM optimizer is used.
• Finally, a fully connected layer and softmax.

Inception-v3: In the Inception-v3 architecture there are 315 layers; in this net the convolution comes first, followed by batch normalization and ReLU. The following are some of the properties of this network:
• Input images of size 299×299×3.
• Four 3×3 max-pooling layers with stride 2.
• Nine inception modules.
• Two grid-size reductions.
• The ReLU activation function is implemented.
• The SGDM optimizer is used.
• Finally, a fully connected layer.
• Then a softmax prediction layer.

Figure 6: Samples of images from the coral dataset.
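The following is a minimal fine-tuning sketch of the kind of setup described in Section 3.1, shown for GoogLeNet with a recent torchvision build; it is not the authors' implementation. The paper specifies an SGD-with-momentum solver, softmax outputs and 30 epochs, while the learning rate, momentum value and batch size here are illustrative assumptions, and train_loader/val_loader come from the earlier dataset sketch. For AlexNet and SqueezeNet the replaced head lives under model.classifier rather than model.fc.

import torch
import torch.nn as nn
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.googlenet(weights="IMAGENET1K_V1")      # ImageNet-pretrained weights
model.fc = nn.Linear(model.fc.in_features, 10)         # new head for the 10 coral classes
model = model.to(device)

criterion = nn.CrossEntropyLoss()                      # softmax is applied inside the loss
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

for epoch in range(30):                                # 30 epochs, as reported in the paper
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()

    model.eval()                                       # Top-1 validation accuracy
    correct = total = 0
    with torch.no_grad():
        for images, labels in val_loader:
            preds = model(images.to(device)).argmax(dim=1).cpu()
            correct += (preds == labels).sum().item()
            total += labels.size(0)
    print(f"epoch {epoch + 1}: top-1 validation accuracy = {100 * correct / total:.2f}%")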
4 Discussion and experimental results

The purpose of implementing several CNN architectures is to measure their efficiency in classification problems, especially on sea coral images, and to determine the most efficient one. We trained these nets according to the specifications described above. The CNN architectures (AlexNet, SqueezeNet, GoogLeNet, and Inception-v3) were trained on ten classes of coral images. The results obtained with these four CNN models are very encouraging, and the overall accuracy across all ten coral classes after 30 epochs is shown in Table 1.

Table 1: Pretrained deep learning models.
Network        Validation accuracy (Top-1)
AlexNet        83.33
SqueezeNet     80.85
GoogLeNet      90.5
Inception-v3   93.17

These conventional accuracies represent Top-1 accuracy, which means the single highest-probability answer must match the expected answer. All the architectures show substantial accuracy, but Inception-v3 and GoogLeNet achieved higher average accuracy than AlexNet and SqueezeNet. The elapsed training time of each net was measured and is shown in Figure 7. As the figure shows, there is a clear difference in the time each network spends in the training phase when the number of epochs is held constant. Note that Inception-v3 had the highest training time, although it also achieved the highest accuracy.

Figure 7: The total training time of the architectures.

For every architecture that is trained, we measure the accuracy of each of the ten categories in order to determine the success rate for each type of coral; here accuracy was measured using Top-5 accuracy (the expected answer must appear among the five highest-probability answers). Table 2 shows the accuracy of each class with each architecture. As is known, the Top-5 method always gives a higher accuracy estimate, as is evident in Table 2, but on careful observation we find that the Great Star and Sea Pens corals score almost the best with every architecture.

Table 2: Accuracy of each coral class with each network.
Name of coral             AlexNet   SqueezeNet   GoogLeNet   Inception-v3
Great Star Coral (hard)   0.9583    0.9916       0.9360      0.9498
Brain Coral (hard)        0.9000    0.9429       0.9513      0.9352
Table Coral (hard)        0.9250    0.9958       0.9017      0.9345
Pillar Coral (hard)       0.9333    0.8941       0.9584      0.9301
Staghorn Coral (hard)     0.8333    0.9428       0.9527      0.9445
Bubble Coral (soft)       0.8667    0.9306       0.9962      0.9299
Sea Pens (soft)           0.9167    0.9958       0.9399      0.9358
Toadstool Coral (soft)    0.9083    0.8857       0.9656      0.9257
Carnation Coral (soft)    0.8750    0.9875       0.9855      0.9076
Gorgonian (soft)          0.8833    0.9428       0.9236      0.9327
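A hedged sketch of the per-class Top-5 measurement behind Table 2: a prediction is counted as correct when the true class appears among the five highest-probability outputs, and hits are accumulated separately for each coral class. The model, val_loader, dataset and device names refer to the earlier sketches and are assumptions of this illustration.

import torch
from collections import defaultdict

model.eval()
hits, counts = defaultdict(int), defaultdict(int)
with torch.no_grad():
    for images, labels in val_loader:
        logits = model(images.to(device))
        top5 = logits.topk(5, dim=1).indices.cpu()     # the five best classes per image
        for true, candidates in zip(labels, top5):
            counts[true.item()] += 1
            hits[true.item()] += int(true.item() in candidates.tolist())

for idx, name in enumerate(dataset.classes):
    if counts[idx]:
        print(f"{name}: top-5 accuracy = {hits[idx] / counts[idx]:.4f}")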
For another test, we trained the architectures separately on each type of coral, i.e., hard and soft. This experiment aims to measure each architecture's efficiency in identifying the classes within each type. Table 3 shows the overall results in this case. It is noticeable here that the accuracy of identifying the classes within each type (hard and soft) was better, and the accuracy for the soft type is somewhat higher than for the hard type across all the architectures.

Table 3: Accuracy of each type of coral.
Network        Accuracy of hard coral   Accuracy of soft coral
AlexNet        89.33                    90
SqueezeNet     86.83                    88.33
GoogLeNet      93.33                    95
Inception-v3   96.0                     96.67

5 Conclusion

In this research we have presented work with multiple CNN architectures (AlexNet, SqueezeNet, GoogLeNet, and Inception-v3) to classify sea coral images. The aim of this work is to study the ability of each of these architectures in a classification problem, especially with this type of image. We want to know how well sea coral images can be classified by adopting these classification models, and we hope at the same time that this work will help clarify the efficiency and ability of each of these CNN architectures, making it easier to choose among them according to the data being processed. What distinguishes this work is the in-depth study that yields decisions in two directions: first, determining the efficiency level of the various CNN architectures, each separately; second, classifying marine coral and obtaining the best results, as clarified in the previous section and in this part. The system adopts ten types of sea coral, five of the hard coral type and five of the soft coral type. Two tests were carried out.

In the first test, each net was trained (separately) on all ten coral classes, and the final results indicate the high efficiency of all the architectures in classifying coral images, but GoogLeNet and Inception-v3 generally recorded better results: the accuracy with GoogLeNet is 90.5% and with Inception-v3 93.17%. This is because GoogLeNet and Inception-v3 have distinct architectural designs compared with the rest; they are deeper networks, so their results are generally more accurate. In the second test, we trained the four nets on each type of coral separately, that is, hard and soft coral, and the results of this test again indicated the high efficiency of the four architectures in classification. GoogLeNet and Inception-v3 were again distinguished by relatively higher results than AlexNet and SqueezeNet: the accuracy on the hard type was 93.33% with GoogLeNet and 96% with Inception-v3, and on the soft type 95% with GoogLeNet and 96.67% with Inception-v3.

Although the results presented in this paper are very encouraging and sufficient for what we were aiming at in this research, some issues may hinder obtaining higher results, including the limited number of images adopted; we believe that with a much larger number of coral images, the results would reach much higher accuracy. Also, GoogLeNet and Inception-v3 take longer than the other models, AlexNet and SqueezeNet, because of the high number of layers in their architectures, especially Inception-v3. Finally, we have tried to highlight the power of CNN models in recognizing coral images by choosing these four different architectures. Although all these nets take considerable execution time on the CPU (especially Inception-v3), and this time of course increases with the number of epochs, they are very powerful discrimination models.

References
[1] Celine Herweijer, Dominic Waughray, "Harnessing Artificial Intelligence for the Earth", PwC and Stanford Woods Institute for the Environment, January 2018.
[2] Manuel González-Rivero, Oscar Beijbom, Alberto Rodriguez-Ramirez and Dominic E., "Monitoring of Coral Reefs Using Artificial Intelligence: A Feasible and Cost-Effective Approach", Remote Sensing, Volume 12, Issue 3, p. 489, 2020, https://doi.org/10.3390/rs12030489.
[3] Muthukrishnan Ramprasath, "Image Classification using Convolutional Neural Networks", International Journal of Pure and Applied Mathematics, Volume 119, No. 17, pp. 1307-1319, 2018.
[4] Wiley Victor and Thomas Lucas, "Computer vision and image processing: a paper review", International Journal of Artificial Intelligence Research, 2.1, pp. 29-36, 2018, https://doi.org/10.29099/ijair.v2i1.42.
[5] Wu, Jianxin, "Introduction to convolutional neural networks", National Key Lab for Novel Software Technology, Nanjing University, China, Vol. 5, no. 23, p. 495, 2017.
[6] F. Sultana, A. Sufian, P. Dutta, "Advancements in image classification using convolutional neural network", in 2018 Fourth International Conference on Research in Computational Intelligence and Communication Networks (ICRCICN), IEEE, pp. 122-129, November 2018, https://doi.org/10.1109/ICRCICN.2018.8718718.
[7] Grm, Klemen, Vitomir Struc, Anais Artiges, Matthieu Caron, and Hazım K. Ekenel, "Strengths and weaknesses of deep learning models for face recognition against image degradations", IET Biometrics, vol. 7, no. 1, pp. 81-89, 2018, https://doi.org/10.1049/iet-bmt.2017.0083.
[8] Shadman Q. Salih, Hawre Kh. Abdulla, "Modified AlexNet Convolution Neural Network for Covid-19 Detection Using Chest X-ray Images", Kurdistan Journal of Applied Research (KJAR), Vol. 5, No. 1, pp. 119-130, 2020, https://doi.org/10.24017/covid.14.
[9] Forrest N. Iandola, Song Han and Matthew W. Moskewicz, "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size", Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence, arXiv preprint arXiv:1602.07360, 2016.
[10] Ali Ahmed, "Pre-trained CNNs Models for Content based Image Retrieval", International Journal of Advanced Computer Science and Applications, Vol. 12, No. 7, pp. 200-206, 2021, https://doi.org/10.14569/ijacsa.2021.0120723.
[11] Gomez-Ríos A., Tabik S., Luengo J., Shihavuddin A.S.M., Krawczyk B. and Herrera F., "Towards highly accurate coral texture images classification using deep convolutional neural networks and data augmentation", Expert Systems with Applications, 118, pp. 315-328, 2018, https://doi.org/10.1016/j.eswa.2018.10.010.
[12] Nur Azida Muhammad, Amelina Ab Nasir and Zaidah Ibrahim, "Evaluation of CNN, AlexNet and GoogLeNet for Fruit Recognition", Indonesian Journal of Electrical Engineering and Computer Science, Vol. 12, No. 2, pp. 468-475, 2018, https://doi.org/10.11591/IJEECS.V12.I2.PP468-475.
[13] Sa Inkyu, Zongyuan Ge, Feras Dayoub, Ben Upcroft, Tristan Perez, and Chris McCool, "DeepFruits: A fruit detection system using deep neural networks", Sensors, Vol. 16, no. 8, p. 1222, 2016, https://doi.org/10.3390/s16081222.
[14] Nivrito, A. K. M., Md Wahed, and Rayed Bin, "Comparative analysis between Inception-v3 and other learning systems using facial expressions detection", PhD diss., BRAC University, 2016.
[15] Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., & Wojna, Z., "Rethinking the inception architecture for computer vision", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2818-2826, 2016, https://doi.org/10.1109/CVPR.2016.308.
[16] Sharan, S., Harsh, H., Kininmonth, S., & Mehta, U., "Automated CNN based coral reef classification using image augmentation and deep learning", International Journal of Engineering Intelligent Systems, Vol. 29, no. 4, pp. 253-261, 2021.
[17] Jaisakthi S.M., Mirunalini P., Aravindan C., "Coral Reef Annotation and Localization using Faster R-CNN", in CLEF (Working Notes), Jan. 2019.
[18] Raphael, A., Dubinsky, Z., Netanyahu, N. S., & Iluz, D., "Deep Neural Network Analysis for Environmental Study of Coral Reefs in the Gulf of Eilat (Aqaba)", Big Data and Cognitive Computing, Vol. 5, no. 2, p. 19, 2021, https://doi.org/10.3390/BDCC5020019.
[19] Szegedy C., Liu W., Jia Y., Sermanet P., Reed S., Anguelov D., ... & Rabinovich, A., "Going deeper with convolutions", in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-9, 2015, https://doi.org/10.1109/CVPR.2015.7298594.
[20] Tusa Eduardo, Alan Reynolds, David M. Lane, Neil M. Robertson, Hyxia V., and Antonio Bosnjak, "Implementation of a fast coral detector using a supervised machine learning and Gabor wavelet feature descriptors", in 2014 IEEE Sensor Systems for a Changing Ocean (SSCO), pp. 1-6, IEEE, 2014, https://doi.org/10.1109/SSCO.2014.7000371.
[21] Elawady M., "Sparse coral classification using deep convolutional neural networks", A Thesis Submitted for the Degree of MSc Erasmus Mundus in Vision and Robotics (VIBOT), Department of Computer Architecture and Technology, University of Girona, 2014.
[22] Ananda A., Ngan K.H., Karabag C., Ter-Sarkisov A., Alonso E. and Reyes-Aldasoro C.C., "Classification and visualisation of normal and abnormal radiographs; a comparison between eleven convolutional neural network architectures", Sensors, Vol. 21, no. 16, p. 5381, 2021, https://doi.org/10.1101/2021.06.16.21259014.
[23] Sharan S., Harsh H., Kininmonth S., & Mehta, U., "Automated CNN based coral reef classification using image augmentation and deep learning", International Journal of Engineering Intelligent Systems, Vol. 29, no. 4, pp. 253-261, 2021.
[24] Sabri N., Aziz Z.A., Ibrahim Z., Rasydan M.A. and Hafiz A., "Comparing convolution neural network models for leaf recognition", International Journal of Engineering & Technology, 7.3.15, pp. 141-144, 2018, https://doi.org/10.14419/IJET.V7I3.15.17518.
[25] Hopkinson B.M., King A.C., Owen D.P., Johnson-Roberson M., Long M.H. and Bhandarkar, S.M., "Automated classification of three-dimensional reconstructions of coral reefs using convolutional neural networks", PLoS ONE, Vol. 15, no. 3, p. e0230671, 2020, https://doi.org/10.1371/journal.pone.0230671.
[26] Wala'a, N. J., & Rana, J. M., "A Survey on Segmentation Techniques for Image Processing", Iraqi Journal for Electrical and Electronic Engineering, vol. 17, pp. 73-9, 2021, https://doi.org/10.37917/ijeee.17.2.10.
[27] Nemer, Z.N., "Hand Gestures Detecting Using Radon and Fan Beam Projection Features", Informatica, 46(5), 2022, https://doi.org/10.31449/inf.v46i5.3744.