https://doi.org/10.31449/inf.v48i12.6771 Informatica 48 (2024) 185–194 185

Detailed Cloud Linear Regression Services in Cloud Computing Environment

Omer K. Jasim Mohammad*, Mohammed E. Seno, Ban N. Dhannoon
University of Fallujah, Al-Ma'arif University College, Al-Nahrain University, Anbar, Iraq
E-mail: Omerk.jasim@uofallujah.edu.iq, mohammed.e.seno@uoa.edu, ban.n.dhannoon@nahrainuniv.edu.iq
* Corresponding author

Keywords: machine learning, linear regression, cloud computing, task scheduling, load balancing, resource allocation

Received: July 24, 2024

This paper presents a novel cloud-based machine learning framework centered on a linear regression method, called Cloud Linear Regression (CLR). CLR combines elements of cloud technology and machine learning principles. It also explores the connection between cloud task scheduling, resource distribution, and machine learning methodologies, showing how linear regression techniques play a pivotal role in enhancing the cloud environment. CLR demonstrated its effectiveness in expansive environments with big data, delivering thorough mining for the best resources with high predictive accuracy and fast response times. It was applied in three scenarios, yielding best CPU-utilization prediction accuracies of 45%, 53.44%, and 59.81%, respectively; these results follow from the type of cloud infrastructure used (a virtualized environment). Moreover, CLR offers an efficient remedy for managing resources, including task scheduling, provisioning, allocation, and ensuring availability. CLR achieved its highest performance of 40% with multitasking resources, 72% for memory utilization, 90% for logical disk utilization, and 30% for bandwidth utilization.
Povzetek: This paper presents Cloud Linear Regression (CLR), a new cloud machine learning framework that improves task scheduling and resource allocation using a linear regression technique, thereby optimizing resource consumption and efficiency.

1 Introduction

The cloud refers to the network of interconnected devices that comprise more than one centralized computing resource [1]. Cloud computing has emerged in recent years as a useful paradigm for providing computational capabilities on a "pay-per-use" basis, driven by the standardization and evolution of the IT industry. With its expanding utility, cloud computing presents enormous opportunities, but also many challenges, for the development of conventional IT [2]. Cloud computing has become an essential technology for organizations to host a variety of IT services, as it provides diverse virtual resources on demand according to a pay-per-use model [3][4]. Cloud computing has also become a prominent platform for scientific applications, aiming to facilitate the sharing of large-scale computing, storage, information, and knowledge resources for scientific research. Extensive research has been conducted in the field of cloud computing task scheduling [5]. Cloud service providers (CSPs) operate massive data centers (DCs) with many servers or physical machines (PMs) [6][7]. Cloud computing manages many virtualized resources, making scheduling a crucial component. An end user may utilize many virtualized assets for each assignment in the cloud, so manual scheduling is not a viable alternative. The purpose of task scheduling is to schedule tasks so as to reduce time loss and maximize productivity [8]. In cloud data centers, virtual machine (VM) consolidation is a procedure that can increase resource efficiency with decreased energy consumption. Virtual machine consolidation is divided into four distinct stages.
In the first stage, all overburdened physical machines (PMs) are identified, as their overload could cause performance degradation and poor Quality of Service (QoS) for cloud consumers [9]. Several of the virtual machines on an overburdened physical machine must migrate to a physical machine with a lighter load [10]. To enhance the quality of service (QoS), a multi-step approach is employed. Initially, the focus is on identifying underloaded physical machines and transferring their virtual machines to other PMs, subsequently putting the underloaded PMs into a sleep state. This conserves energy by reducing unnecessary consumption in inactive machines. The subsequent step involves selecting appropriate physical machines to which all the virtual machines from both overloaded and underloaded PMs can be migrated. Finally, the best candidate virtual machines are chosen for migration from the overloaded physical machines [11]. This paper presents a new prediction model that combines cloud computing technology and machine learning to enhance resource distribution; accordingly, a new Cloud Linear Regression (CLR) prediction model is proposed to detect whether a virtual machine is overloaded or underloaded. The rest of the paper is structured as follows: Section 2 describes the problem. Section 3 covers cloud resource management. Section 4 gives a short description of the linear regression model. Section 5 presents the Cloud Linear Regression methodology and implementation. Section 6 reports the results and discussion. Section 7 discusses the execution-time feature in detail. Finally, Section 8 presents the conclusions and areas for future exploration.

2 Problem description

In a generalized context, consider a scenario where there exists a set of virtual machines (VMs) associated with cloud customer tasks.
Each VM is equipped with multiple resources that possess precedence constraints. The processing of a VM can occur on any available resource within the cloud environment. Each resource within the cloud has a specified capacity, such as CPU, memory, network, and storage. It is important to note that a VM can only be processed on a limited set of resources at a time, and the availability of requests is continuous. This study focuses on cloud computing environments, specifically addressing resource allocation and task scheduling scenarios using machine learning methods, and introduces a new method, Cloud Linear Regression.

Inputs: X: a given set X = (x1, x2, …, xi, …, xn), where i ∈ [1, n] and n is the number of independent user requests. The available resource set is R = (R1, R2, …, Rk, …, Rm), where k ∈ [1, m] and m is the number of available resources inside the VMs.

Outputs: Y: an efficient CLR (Cloud Linear Regression) schedule, including the assignment of tasks to available resources and the makespan. As an example of a resource problem, a schedule might be {(X1, R2), (X2, R4), (X3, R2), (X4, R4), (X5, R1)}.

The challenge lies in the need for careful coordination and optimization of both task scheduling and resource allocation to achieve an overall schedule that is cost-effective and time-efficient. This requires deploying linear regression techniques to strike the right balance between assigning tasks to appropriate resources and optimizing the utilization of those resources. By effectively coordinating task scheduling and resource allocation, it is possible to achieve an optimized schedule that minimizes costs and reduces the overall time required for task completion.

3 Cloud resource management

Cloud computing resources allocated to users grant access to a resource pool within a cloud environment.
Depending on their requirements, which encompass factors like performance and cost considerations, users have the flexibility to select their preferred configurations, including choices related to CPUs, memory, and network bandwidth [12][13]. Resource management is a vital subject frequently applied in the coordination of tasks, each with its distinct applications in problem-solving and decision-making [14]. Effectively administering the physical resources within their infrastructure poses a significant challenge for providers of cloud infrastructure. Effective resource management necessitates the consideration of numerous factors [15]:
1- Physical resources are allocated and utilized by multiple independent services, making resource sharing an essential aspect of cloud infrastructure management.
2- The levels of service promised by the infrastructure and the service provider.
3- The price tag attached to using virtualized hardware to power hosted services.
The task schedulers in place significantly impact a system's resource utilization and operational costs [16]. There are two primary scenarios where allocation plays a crucial role: first, the allocation of resources for immediate utilization, and second, the allocation of resources for upcoming requests [13]. Both scenarios can utilize the linear regression method. The core strategy for optimizing cloud resources often focuses on a specific resource, such as CPU, and a scalability parameter [14]. The challenge associated with utilizing cloud computing resources for services lies in accurately estimating the necessary resource quantity and efficiently allocating it. This challenge manifests on both ends: on the client side, where there is an effort to minimize service expenses while still meeting service level agreements (SLAs), and from the perspective of the cloud service provider, where optimizing resource utilization is critical to avoid costly and unnecessary upgrades [17].
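To make the coordination problem of Section 2 concrete, the example schedule {(X1, R2), …, (X5, R1)} and its makespan can be sketched in a few lines. The per-task runtimes below are invented for illustration; they are not from the paper.

```python
# Illustrative sketch: tasks X assigned to resources R, with a makespan
# computed from per-resource completion times. Runtimes are assumed values.
task_runtime = {"X1": 4, "X2": 3, "X3": 2, "X4": 5, "X5": 1}

# The example schedule from the text: task -> resource
schedule = {"X1": "R2", "X2": "R4", "X3": "R2", "X4": "R4", "X5": "R1"}

def makespan(schedule, runtime):
    """Makespan = finish time of the busiest resource (tasks run serially per resource)."""
    load = {}
    for task, res in schedule.items():
        load[res] = load.get(res, 0) + runtime[task]
    return max(load.values())

print(makespan(schedule, task_runtime))  # R4 carries X2 + X4 = 8 time units
```

A cost-effective schedule is one whose assignment minimizes this makespan across all feasible task-to-resource mappings.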
4 Cloud-based machine learning applications

Machine learning (ML) can be defined as a subset of artificial intelligence (AI) that empowers software applications to produce more accurate outcomes without explicit, rule-based programming [18]. It is a method that allows a computer to independently acquire knowledge from data [19]. The goal of machine learning is to investigate and navigate a spectrum of possible hypotheses, encompassing various parameter values, to identify the optimal model fit based on the provided data [20]. Machine learning is employed to simulate and streamline the research and forecasting process [21]. Hence, when data features are employed to train machine learning models, they exert a significant influence on model performance [22]. The input for machine learning algorithms consists of data collected and utilized to forecast future output values. Alongside big data and the field of data science as a whole, machine learning is currently undergoing rapid growth [23]. Machine learning algorithms are primarily categorized into three main groups: supervised, unsupervised, and reinforcement learning [24]. Linear regression is a mathematical way of doing predictive analysis, and it is useful for forecasting variables that are continuous, real, or numerical [25, 26, 27]. As shown in Figure 1, linear regression is a statistical technique designed to evaluate and quantify the association between the variables under consideration. It is a regularly used mathematical research approach in which the projected effects may be measured and modeled against many input variables. It is a data assessment and modeling approach that develops linear connections between dependent and independent variables [28].
Figure 1: Linear regression description

Linear regression is an alternative strategy for treating the prediction challenge: it builds a relationship between an independent variable x and a dependent output y, as seen in the linear regression formulation of Figure (1). In a real-time environment, various kinds of feedback factors are processed using larger data representations; in this instance, multiple linear regression is used. The linear regression model takes the following form if the independent variables are X = [X1, X2, …, Xn] and y denotes the associated dependent output:

y = β0 + Σ_{i=1}^{n} βi Xi   (1)

During the training phase, linear regression determines the unknown attribute values β0, …, βn, resulting in the best fit to the stored data points. The resource provisioning paradigm is used to provide the cloud computing resources, which include the system, memory, and CPU processing capacity. The quantity of each resource type can be expressed as processing capacity in units of CPU-hours, storage space in units of GB/month, and Internet bandwidth in units of GB/month. Each machine class in the virtual machine repository specifies the amount of each resource type [29].

5 Cloud linear regression: methodology and implementation

Integrating linear regression with cloud task scheduling combines the predictive ability of linear regression with the adaptability and scalability of cloud task-scheduling systems. To optimize task scheduling in a cloud system environment, the proposal employs a linear regression prediction method under the service level agreement; the handling of overloaded hosts utilizes a continuous migration procedure during loading in the environment. The regression model employs a linear function to estimate the prediction function, which enables us to establish the linear correlation between future and current resource usage.
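As a minimal sketch of the fit in Eq. (1), the coefficients β can be estimated by ordinary least squares; the utilization figures below are invented toy data, not values from the CLR dataset.

```python
import numpy as np

# Minimal sketch of Eq. (1): fit beta_0 and beta_1 by ordinary least squares
# to relate current resource usage to next-interval usage (toy data).
X = np.array([[10.0], [20.0], [30.0], [40.0]])   # current CPU utilization (%)
y = np.array([12.0, 22.0, 32.0, 42.0])           # next-interval utilization (%)

# Add the intercept column so beta = [beta_0, beta_1].
A = np.hstack([np.ones((len(X), 1)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

# Predict next-interval usage for a VM currently at 50% CPU.
pred = beta[0] + beta[1] * 50.0
print(round(pred, 1))  # 52.0 with this perfectly linear toy data
```

With real monitoring data the fit is not exact, and the residuals indicate how reliable the usage forecast is.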
By utilizing the prediction function, one can make informed estimates about resource requirements based on the observed patterns and relationships in the data. As Figure (2) shows, by combining linear regression with cloud task scheduling, operators can automate and expedite the entire data-analysis procedure. Utilizing the scalability and adaptability of cloud computing resources, this integration enables the efficient scheduling, execution, and monitoring of linear regression tasks.

Figure 2: Integrating the cloud with linear regression

As shown in Figure (2), the workflow of the proposed solution can be summarized as follows:
➢ Detection of overloaded VMs using a resource-usage procedure that assigns a threshold value to every host.
➢ Linear regression selects the optimal VM and avoids VMs with high overload.
➢ A live migration procedure transfers a running virtual machine (VM) from one physical host to another without interruption or delay.

5.1 Experimental environment

This section showcases the outcomes of optimizing resources in the cloud environment using a linear regression algorithm, presenting the results of the proposed environment through tables and figures. To implement the CLR environment, the following requirements must be met:
1. Hardware requirements: This work was implemented on a Xen server with a dual Core i7 processor, 32 GB of RAM, and a 1 TB hard disk.
2. Software requirements: This work was programmed in the C# and Python languages on a real private cloud environment based on several Microsoft platforms (Windows Server 2016, SQL Server 2016, Hyper-V, SCVMM 2016, SCOM 2016).
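The three workflow steps above (threshold-based detection, VM selection, live migration) can be sketched as follows; the host names, utilization values, and the 80% threshold are illustrative assumptions, not values from the paper.

```python
# Sketch of the Figure (2) workflow under assumed names: flag hosts whose
# predicted CPU usage exceeds a threshold, then pick the least-loaded host
# as the live-migration target. All numbers are illustrative.
THRESHOLD = 80.0  # per-host utilization threshold (%)

predicted_cpu = {"host1": 92.0, "host2": 45.0, "host3": 85.0, "host4": 30.0}

overloaded = [h for h, u in predicted_cpu.items() if u > THRESHOLD]
candidates = {h: u for h, u in predicted_cpu.items() if u <= THRESHOLD}
target = min(candidates, key=candidates.get)  # least-loaded host receives migrated VMs

print(overloaded, target)  # hosts above threshold, then the migration target
```

In the full model the `predicted_cpu` values would come from the regression fit of Eq. (1) rather than being fixed constants.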
Figure 3: CLR experimental environment

As Figure (3) shows, the main focus of the CLR environment lies on its backbone, the spinal column representing the primary resources (CPU, memory, storage, and bandwidth), which are recorded and monitored at various intervals and under different conditions.

5.2 CLR steps and algorithm

The Cloud Linear Regression machine learning algorithm can be used to estimate a continuous value based on a set of independent variables (user requests). This is a supervised learning algorithm, which necessitates labeled training data. The main steps of the CLR algorithm were introduced, and their steps explained, in our first paper [30]. The algorithm fits a linear line to the training data and then uses this line to predict the value of the dependent variable for new data. Figure (4) shows the flowchart of the CLR model. The flowchart consists of three stages, each comprising a set of steps. It starts with receiving the user's resource reservation request and utilizing the dedicated environment. Then, data about the environment and the resource utilization ratios for each user's allocated machine is collected and fed into the linear regression algorithm. The process concludes with the prediction algorithm determining the best available resources that can be provided by the system administrator, aiming to achieve optimal efficiency in the shortest possible time. The main stages of the Cloud Linear Regression (CLR) algorithm are explained in Figure (4). In the first stage, the system receives the user's request and analyzes the required data. In the second stage, resources are calculated based on the available data in each VM to make the best selection. The third stage involves executing the algorithm to choose the best VMs based on the resources utilized within them. The basic concepts of the suggested model are a straight line, a mean point, and a dynamic random search.
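The three stages just described can be sketched as a small pipeline; all function names and numbers below are hypothetical illustrations, not the authors' implementation.

```python
# Hypothetical sketch of the three CLR stages from Figure (4): receive the
# request, compute per-VM resource availability, then select the best VM.
# Names and utilization figures are invented for illustration.

def stage1_receive(request):
    """Stage 1: accept the user's reservation request and extract requirements."""
    return {"cpu": request["cpu"], "mem": request["mem"]}

def stage2_measure(vms):
    """Stage 2: compute free capacity for each VM from its utilization data (%)."""
    return {vm: {"cpu": 100 - u["cpu"], "mem": 100 - u["mem"]}
            for vm, u in vms.items()}

def stage3_select(needs, free):
    """Stage 3: pick the VM with the most spare CPU among those that fit."""
    fitting = {vm: f for vm, f in free.items()
               if f["cpu"] >= needs["cpu"] and f["mem"] >= needs["mem"]}
    return max(fitting, key=lambda vm: fitting[vm]["cpu"])

vms = {"CLR-1": {"cpu": 60, "mem": 70}, "CLR-2": {"cpu": 20, "mem": 30}}
needs = stage1_receive({"cpu": 30, "mem": 40})
print(stage3_select(needs, stage2_measure(vms)))  # CLR-2 has the most headroom
```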
The model is entirely predicated on the idea of deploying the greatest number of virtual machines (VMs) on a cloud host, which boosts efficiency and lowers energy usage in the cloud data center. The CLR model accomplishes numerous tasks, including VM consolidation, migration, placement, and linear regression prediction.

Figure 4: Flowchart of the CLR model

5.3 CLR environment scenarios

The CLR environment consists of a number of connected VMs; each VM has specific resources (CPU, memory, bandwidth, and storage) based on the capacity of the environment. Additionally, various conditions are simulated. These scenarios are represented in Figure (5):
1. The first set comprises 4 VMs.
2. The second set comprises 7 VMs.
3. The third set comprises 10 VMs.

Figure 5: CLR environment scenarios

In Figure (5), the CLR model is illustrated in various scenarios to enhance service delivery by providing more resources and improved response times.

6 Results and discussion

In this section, the values obtained from the CLR environment are assigned as inputs to the linear regression algorithm, which trains on them and predicts the best VM among those monitored. This step identifies the optimal resources that, in turn, improve the system environment to meet the users' requirements and the SLA. In the following part, the algorithm's operation is explained based on Figure (6).

Figure 6: Implementation of CLR with four VMs

➢ Scenario-1
In this scenario, four VMs (CLR-1…CLR-4) are considered based on the environment's capacity. All these VMs were installed with equal resources and are monitored, with their data collected at regular intervals with an equal number of users. Figure (7) illustrates the values of each resource in the CLR environment.
Figure 7: Dataset of four VMs, scenario-1

The linear regression algorithm uses the collected data to train and predict the best-performing CLR-VM, which has lower values compared to the others. This selected CLR-VM will be utilized in the next stage to meet the users' needs, providing better efficiency, reduced time, and cost savings. As shown in Figure (8), VM 4 was tested in an exceptionally short period compared to the other VMs because it has the lowest resource utilization. This indicates that it is the optimal choice, closest to the environment's requirements, and provides the highest efficiency and abundance of resources.

Figure 8: CLR predicts the best-performing VM

➢ Scenario-2
Using the same method for monitoring and tracing the system's essential resources, data is gathered for a second scenario involving a set of seven VMs. This permits evaluating the system under various conditions and resource allocations while maintaining the same approach concerning time, intervals, and users. Figure (9) illustrates the values of each resource in the CLR environment with 7 VMs.

Figure 9: Dataset of seven VMs, scenario-2

As shown in Figure (10), VM 6 was tested in an exceptionally short period compared to the other VMs because it has the lowest resource utilization.

Figure 10: CLR predicts the best-performing VM

➢ Scenario-3
As mentioned in Scenario-2, the same steps used in Scenario-1 are applied to Scenario-3, involving a set of ten VMs. The monitoring and tracking procedures are repeated, following the same approach as before, with consistent time intervals and user interactions. Figure (11) illustrates the values of each resource in the CLR environment with 10 VMs.

Figure 11: Dataset of ten VMs, scenario-3

As shown in Figure (12), VM 7 was evaluated in a very short time compared to the other VMs due to its minimal resource use.
Figure 12: CLR predicts the best-performing VM

The results presented in the previous scenarios demonstrate the algorithm's effectiveness in the cloud environment. Through this process, resources are intelligently distributed, leading to an enhancement in the quality of service by predicting the provisioning of additional suitable resources. This, in turn, improves the quality of the provided service.

7 Execution time

While some of the suggested updated scheduling models are acceptable, none of them produces a flawless solution. The CLR model assigns tasks with great responsiveness and offers an effective way to accelerate execution time. This efficiency comes from the CLR model's working style and its dynamic search in the open online environment. CLR calculates the overall execution time for task scheduling and CPU consumption, and tries to transfer or swap the set of tasks from one overloaded virtual machine (VM) to another, underloaded VM. In essence, the optimization process is carried out on every overloaded cloud virtual machine and is continued as long as the configuration value can be further improved. Practically speaking, the quality of training, the amount of data, the methodology's phases, and the effectiveness of the trained model all have a significant impact on cloud computing efficiency. Total execution time (ET), on the other hand, is the amount of time required to run the model on the associated data set. The CLR model determines its execution time based on data size, excluding the performance resources (CPU, memory, bandwidth, and logical disk) and data-training quality. Comparatively, the CLR model demonstrated a substantially faster execution time than other benchmark prediction models. In general, CLR greatly reduced ET and decisively outperformed the popular prediction models. Finally, a clever and sustainable cloud environment can be constructed with CLR.
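The dependence of total execution time (ET) on data size can be sketched by timing a regression fit over synthetic datasets of increasing size; the sizes, features, and coefficients below are illustrative, not the paper's benchmark.

```python
import time
import numpy as np

# Sketch of measuring total execution time (ET) of a regression fit as a
# function of dataset size; data is synthetic and sizes are arbitrary.
def fit_and_time(n_samples):
    rng = np.random.default_rng(0)
    X = rng.random((n_samples, 4))          # 4 resource features (illustrative)
    y = X @ np.array([0.5, 0.2, 0.2, 0.1])  # synthetic linear target
    A = np.hstack([np.ones((n_samples, 1)), X])
    start = time.perf_counter()
    np.linalg.lstsq(A, y, rcond=None)       # the model fit being timed
    return time.perf_counter() - start

for n in (1_000, 10_000, 100_000):
    print(n, f"{fit_and_time(n):.4f}s")     # ET grows with data size
```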
8 Conclusions and future works

This study demonstrated the need for machine learning models to enhance cloud resource scheduling, provisioning, and service planning. CLR acts as an innovative cloud machine learning model that utilizes the linear regression methodology, combining both cloud technology and machine learning aspects. The main goal of the CLR model is optimizing the priority of task scheduling and resource allocation. It focuses on classifying each cloud user at different levels to prioritize their tasks when arranging them in the task queue. The application of CLR extended to three distinct scenarios. The optimal predictive accuracy for CPU utilization was 45% for the first scenario, 53.44% for the second scenario, and 59.81% for the third scenario. Moreover, when considering multitasking resources, CLR demonstrated superior performance, attaining accuracy percentages of 40% for CPU utilization, 72% for memory utilization, 90% for logical disk utilization, and 30% for bandwidth utilization. Furthermore, CLR has proven its efficacy in addressing expansive settings characterized by substantial data volumes. It achieves this through extensive mining to attain optimal predictive accuracy and response times for valuable resources. The outcomes obtained from the three scenarios demonstrate the effectiveness of the linear regression algorithm within a cloud-based environment. This approach facilitates intelligent resource distribution, thereby augmenting service quality through accurate anticipation of the need for additional resources. Consequently, the overall service quality is elevated. Future work related to this study includes testing CLR on interconnected tasks, which will certainly affect the performance of the algorithm and may produce different results.
Also, scheduling could be added as a service layer in the cloud infrastructure (ShaaS).

References

[1] K. Wu, P. Lu, and Z. Zhu, "Distributed online scheduling and routing of multicast-oriented tasks for profit-driven cloud computing," IEEE Commun. Lett., vol. 20, no. 4, pp. 684–687, 2016. https://doi.org/10.1109/lcomm.2016.2526001
[2] C. Cheng, J. Li, and Y. Wang, "An energy-saving task scheduling strategy based on vacation queuing theory in cloud computing," Tsinghua Sci. Technol., vol. 20, no. 1, pp. 28–39, 2015. https://doi.org/10.1109/TST.2015.7040511
[3] M. Masdari, F. Salehi, M. Jalali, and M. Bidaki, "A survey of PSO-based scheduling algorithms in cloud computing," J. Netw. Syst. Manag., vol. 25, no. 1, pp. 122–158, 2017. https://doi.org/10.1007/s10922-016-9385-9
[4] M. Alizadeh, S. Abolfazli, M. Zamani, S. Baharun, and K. Sakurai, "Authentication in mobile cloud computing: A survey," J. Netw. Comput. Appl., vol. 61, pp. 59–80, 2016. https://doi.org/10.1016/j.jnca.2015.10.005
[5] S. Ghanbari and M. Othman, "A priority based job scheduling algorithm in cloud computing," Procedia Eng., vol. 50, pp. 778–785, 2012. https://doi.org/10.1016/j.proeng.2012.10.086
[6] M. Masdari, S. ValiKardan, Z. Shahi, and S. I. Azar, "Towards workflow scheduling in cloud computing: a comprehensive analysis," J. Netw. Comput. Appl., vol. 66, pp. 64–82, 2016. https://doi.org/10.1016/j.jnca.2016.01.01
[7] M. N. Cheraghlou, A. Khadem-Zadeh, and M. Haghparast, "A survey of fault tolerance architecture in cloud computing," J. Netw. Comput. Appl., vol. 61, pp. 81–92, 2016. https://doi.org/10.1016/j.jksuci.2018.09.021
[8] A. R. Arunarani, D. Manjula, and V. Sugumaran, "Task scheduling techniques in cloud computing: A literature survey," Futur. Gener. Comput. Syst., vol. 91, pp. 407–415, 2019. https://doi.org/10.1016/j.future.2018.09.014
[9] S. Heidari and R. Buyya, "Quality of Service (QoS)-driven resource provisioning for large-scale graph processing in cloud computing environments: Graph Processing-as-a-Service (GPaaS)," Futur. Gener. Comput. Syst., vol. 96, pp. 490–501, 2019. https://doi.org/10.1016/j.future.2019.02.048
[10] A. Beloglazov and R. Buyya, "Adaptive threshold-based approach for energy-efficient consolidation of virtual machines in cloud data centers," MGC@Middleware, 2010. https://doi.org/10.1145/1890799.189080
[11] P. K. Upadhyay, A. Pandita, and N. Joshi, "Scaled conjugate gradient backpropagation based SLA violation prediction in cloud computing," in 2019 International Conference on Computational Intelligence and Knowledge Economy (ICCIKE), 2019, pp. 203–208. https://doi.org/10.1109/ICCIKE47802.2019.9004240
[12] G. Mandal, S. Dam, K. Dasgupta, and P. Dutta, "A linear regression-based resource utilization prediction policy for live migration in cloud computing," Stud. Comput. Intell., vol. 870, pp. 109–128, 2020. https://doi.org/10.1007/978-981-15-1041-0_7
[13] F. Chen, T. Xiang, X. Lei, and J. Chen, "Highly efficient linear regression outsourcing to a cloud," IEEE Trans. Cloud Comput., vol. 2, no. 4, pp. 499–508, 2014. https://doi.org/10.1109/TCC.2014.2378757
[14] W. T. Tsai, G. Qi, and Y. Chen, "A cost-effective intelligent configuration model in cloud computing," in Proc. 32nd IEEE Int. Conf. Distrib. Comput. Syst. Workshops (ICDCSW 2012), 2012, pp. 400–408. https://doi.org/10.1109/ICDCSW.2012.46
[15] A. J. Younge, G. Von Laszewski, L. Wang, S. Lopez-Alarcon, and W. Carithers, "Efficient resource management for cloud computing environments," in International Conference on Green Computing, 2010, pp. 357–364. https://doi.org/10.1109/GREENCOMP.2010.5598294
[16] N. Jafari Navimipour and F. Sharifi Milani, "Task scheduling in the cloud computing based on the cuckoo search algorithm," Int. J. Model. Optim., vol. 5, no. 1, pp. 44–47, 2015. https://doi.org/10.7763/ijmo.2015.v5.434
[17] P. Nawrocki, M. Grzywacz, and B. Sniezynski, "Adaptive resource planning for cloud-based services using machine learning," J. Parallel Distrib. Comput., vol. 152, pp. 88–97, 2021. https://doi.org/10.1016/j.jpdc.2021.02.018
[18] "Machine learning (ML)." [Online]. Available: https://www.techtarget.com/searchenterpriseai/definition/machine-learning-ML
[19] T. M. Mitchell, Machine Learning. New York, NY, USA: McGraw-Hill, Inc., 1997.
[20] J. G. Carbonell, R. S. Michalski, and T. M. Mitchell, "An overview of machine learning," Mach. Learn., pp. 3–23, 1983. https://doi.org/10.1016/B978-0-08-051054-5.50005-4
[21] Z. N. Shahweli, B. N. Dhannoon, et al., "In silico molecular classification of breast and prostate cancers using back propagation neural network," Cancer Biology, vol. 7, no. 3, 2017. https://doi.org/10.7537/marscbj070317.01
[22] Z. Hussien et al., "Anomaly detection approach based on deep neural network and dropout," Baghdad Sci. J., vol. 17, no. 2(SI), p. 701, Jun. 2020. https://doi.org/10.21123/bsj.2020.17.2(SI).0701
[23] M. Alloghani, D. Al-Jumeily, J. Mustafina, A. Hussain, and A. J. Aljaaf, "A systematic review on supervised and unsupervised machine learning algorithms for data science," in Supervised and Unsupervised Learning for Data Science, M. Berry, A. Mohamed, and B. Yap, Eds. Springer, Cham, 2020. https://doi.org/10.1007/978-3-030-22475-2_1
[24] R. Saravanan and P. Sujatha, "A state of art techniques on machine learning algorithms: a perspective of supervised learning approaches in data classification," in 2018 Second International Conference on Intelligent Computing and Control Systems (ICICCS), 2018, pp. 945–949. https://doi.org/10.1109/ICCONS.2018.8663155
[25] Y. Tokuda, M. Fujisawa, J. Ogawa, and Y. Ueda, "A machine learning approach to the prediction of the dispersion property of oxide glass," AIP Adv., vol. 11, no. 12, p. 125127, 2021. https://doi.org/10.1063/5.0075425
[26] V. A. Brei, "Machine learning in marketing: Overview, learning strategies, applications, and future developments," Found. Trends Mark., vol. 14, no. 3, pp. 173–236, 2020. https://doi.org/10.1561/1700000065
[27] D. Q. Zeebaree, H. Haron, A. M. Abdulazeez, and D. A. Zebari, "Machine learning and region growing for breast cancer segmentation," in 2019 International Conference on Advanced Science and Engineering (ICOASE), 2019, pp. 88–93. https://doi.org/10.1109/ICOASE.2019.8723832
[28] D. Maulud and A. M. Abdulazeez, "A review on linear regression comprehensive in machine learning," J. Appl. Sci. Technol. Trends, vol. 1, no. 4, pp. 140–147, 2020. https://doi.org/10.38094/jastt1457
[29] G. Baranwal, D. Kumar, Z. Raza, and D. P. Vidyarthi, "Reverse auction-based cloud resource provisioning," in Auction Based Resource Provisioning in Cloud Computing, pp. 53–73, 2018. https://doi.org/10.1007/978-981-10-8737-0_4
[30] M. E. Seno, O. K. J. Mohammad, and B. N. Dhannoon, "CLR: Cloud Linear Regression environment as a more effective resource-task scheduling environment (state-of-the-art)," Int. J. Interact. Mob. Technol., vol. 16, no. 22, 2022. https://doi.org/10.3991/ijim.v16i22.35791