https://doi.org/10.31449/inf.v47i7.4799 Informatica 47 (2023) 51–62 51

Clarity Method of Low-illumination and Dusty Coal Mine Images Based on Improved AMEF

Chang Su 1,2, Ziqiang Li 1,2*, Zhongliang Wei 3, Naizhong Xu 4 and Quan Yuan 1,2
1 State Key Laboratory of Mining Response and Disaster Prevention and Control in Deep Coal Mine, Anhui University of Science and Technology, Huainan, China
2 School of Mechanical Engineering, Anhui University of Science and Technology, Huainan, China
3 School of Computer Science and Engineering, Anhui University of Science and Technology, Huainan, China
4 School of Mining, Anhui University of Science and Technology, Huainan, China
E-mail: suchanguser@126.com, 18697534092@163.com, zhlwei@aust.edu.cn, miningearth@126.com, yq2331591242@163.com
*Corresponding Author

Keywords: artificial multiple-exposure image fusion, S-type function, gradient domain guide filter, homomorphic filtering

Received: April 14, 2023

Existing image processing methods based on physical models can suffer significantly degraded defogging performance owing to inaccurate estimation of depth-of-field information. These methods often encounter problems such as low brightness, color distortion, and loss of detail when processing images captured under poor lighting conditions, such as those taken in coal mines. To address these issues, this paper proposes a new algorithm based on artificial multi-exposure image fusion. The proposed method performs global exposure on images with uneven illumination by combining S-type functions with the Contrast Limited Adaptive Histogram Equalization (CLAHE) algorithm in the Hue-Saturation-Value (HSV) color space. This reduces the spatial dependence of brightness during processing and avoids the color distortion that can arise in the Red-Green-Blue (RGB) color space.
To mitigate the issue of detail loss, a gradient-domain guided filter is used to preserve fine structures in images, while an improved homomorphic filtering algorithm is introduced into the Laplacian pyramid decomposition to reduce the loss of image content in large dark areas. The paper also reports subjective, objective, and computation-time comparisons, providing reliable results regarding the speed, quality, and reliability of hazy-image processing.

Povzetek: The proposed artificial multi-exposure fusion method enables better haze removal in images with uneven illumination. It uses the CLAHE algorithm in the HSV color space and a guided filter to preserve details. The results show high quality and fast processing of hazy images.

1 Introduction

The intricate conditions present in coal mines generate numerous external environmental factors, such as fog, low illumination, and glare, that degrade the quality of instrument-captured images. This decline in image quality not only hinders the efficiency and advancement of automated underground operations but also poses significant safety risks for staff. Unfortunately, despite ongoing research efforts, effective image dehazing methods for such images remain lacking.

In recent years, numerous methods for image dehazing and enhancement have been proposed. Shi et al. [1] used normalized gamma correction and contrast limited adaptive histogram equalization (CLAHE) to process the luminance component and enhanced images using color correction based on grey-world theory. Zhi et al. [2, 3] introduced an adaptive S-type function to adjust the luminance component of images with uneven illumination, which significantly improved image clarity and visual effect. Meanwhile, He et al. [4] proposed the dark channel prior (DCP) method for foggy image processing, but it suffers from long processing time, low brightness, and color distortion. Qin et al.
[5] combined the quadtree algorithm with DCP to optimize transmission, achieving significant defogging effects without color distortion. Similarly, He et al. [6] employed a guided filter (GIF) to obtain fine transmission maps, though its performance is limited in special environments such as low illumination. Meng et al. [7] proposed a boundary-constraint and context-regularization method to optimize transmission, achieving defogging of foggy images, albeit with generally darker results. Zhu et al. [8] constructed a nonlinear depth-of-field equation using the color attenuation prior (CAP) to obtain transmission maps, which only apply to outdoor images with good lighting conditions. Berman et al. [9, 10] modeled the pixels of hazy images as lines in Red-Green-Blue (RGB) color space passing through the air-light coordinates, termed 'haze lines', estimating transmission from each pixel's position along the line it belongs to. Ehsan et al. [11] used dual-channel transmission combined with DCP and a gradient-domain guided filter to refine the dual transmission maps, while Galdran [12] employed the artificial multi-exposure image fusion (AMEF) algorithm to remove dust and fog, requiring no inversion of the physical imaging model to restore clear images. Finally, Yang et al. [13] estimated atmospheric light under non-uniform illumination at night based on superpixels, successfully realizing night-image dehazing. Li et al. [14] and Ullah et al. [15] applied convolutional neural network (CNN) models, called Light Dehaze and the adversarial neural network based on the Pix2pix Framework Dehaze (BPFD), respectively, to dehaze images, generating the corresponding networks from datasets. However, these methods have complex structures and require large datasets to improve accuracy. Currently, many image processing methods are designed for outdoor images with good illumination and a single environment.
However, these methods may not be as effective for underground images with low illumination and fog. This paper proposes using the AMEF method to clarify underground coal mine images, which avoids the complicated process of estimating and refining the atmospheric light and transmittance; this significantly shortens the running time and meets the real-time and visibility requirements of underground image processing. The main objective of this paper is to enhance the quality of underground coal mine images while avoiding color distortion and excessive contrast enhancement. In summary, the paper's contributions can be distilled into three key aspects. Firstly, the brightness component of the exposure image set is processed in the HSV color space using CLAHE and a new S-type function, minimizing the impact of adverse lighting conditions on image processing. This approach compensates for the shortcomings of operating directly in the RGB color space, such as color distortion and excessive contrast enhancement. Secondly, a new method for calculating weights is proposed, which exploits the excellent edge-preservation performance of gradient-domain guided filters to retain the fine structure of the image. Finally, a modified homomorphic filtering algorithm is introduced into the Laplacian-pyramid multi-exposure image decomposition to improve the quality of blurred images by enhancing detail through a dual-domain transformation. The proposed method not only removes fog interference but also enhances the contrast of degraded images. The rest of the paper is organized as follows: Section 2 provides an overview of related work on atmospheric scattering models and artificial multiple-exposure image fusion. Section 3 describes the proposed method and the improvements made.
Section 4 demonstrates the effectiveness of the proposed method through experiments on actual underground coal mine images and comparison with other algorithms, using subjective visual and objective evaluation as criteria. Finally, Section 5 presents the conclusion.

2 Related works

2.1 Atmospheric scattering model

During light scattering, the direction of light deviates due to the scattering effect of the solid particles and liquid droplets present in the air, resulting in a change in the intensity of the light. In the subterranean environment of coal mines, the influence of dust and fog on images is more severe than in the atmospheric environment. Nonetheless, the fundamental principle remains the same and can be analyzed with the atmospheric scattering model, which is the primary physical model used to describe fog in images. This model is expressed by Equation (1):

I(x) = t(x)J(x) + A(1 − t(x))    (1)

where I(x) represents the image affected by fog and dust, J(x) is the clear, fog-free image, the transmittance t(x) is inversely proportional to the depth of field, and A is the atmospheric light intensity. Typically, prior information is used to infer t(x) and A. By transforming Equation (1), an expression for the restored image can be obtained:

J(x) = (I(x) − A) / t(x) + A    (2)

2.2 Artificial multiple-exposure image fusion

The objective is to establish a spatially varying image enhancement technique capable of eliminating the visual effects of fog using the artificial multiple-exposure fusion (AMEF) technique proposed by Galdran [12]. This technique eliminates the need to estimate the transmittance and air light in Equation (1). To this end, consider the input hazy image I(x) with intensity values ranging from 0 to 1. Any solution J(x) to the image dehazing problem must contain intensity values lower than those of I(x). This can be seen by rearranging Equation (1) as follows.
t(x) = (A − I(x)) / (A − J(x))    (3)

Since t(x) ∈ [0, 1], it follows from Equation (3) that A − I(x) ≤ A − J(x), and hence that J(x) ≤ I(x) for all x. Therefore, AMEF can use a simple and effective multiple-exposure fusion strategy to fuse the information of a group of multi-exposure versions of the original image I(x) and obtain a clear, fog-free version of I(x). The artificial multi-exposure images are produced by gamma correction [16]. The gamma transform equation is:

J(x) = α · I^γ(x)    (4)

where α and γ are the gamma correction parameters, both positive real numbers. In certain cases the dehazing outcome may turn out excessively dark when the input hazy image contains insufficient areas with desirable contrast, especially for overexposed images. Therefore, the CLAHE [17] method is usually applied to append a contrast-enhanced version of the original image to the initial set of artificially underexposed images.

Figure 1: The flow chart of AMEF defogging.

If the dimension of the source image E_k(x) is m×n, the Laplacian pyramid fusion can be expressed as follows:

J(x) = Σ_{i=1}^{N} us(m, n) [ Σ_{k=1}^{K} P_k^i(x) W_k^i(x) ]    (5)

where us(m, n) is the operator that up-samples a given image to the dimensions of E_k. In this paper, P_k^i = E_k^i is defined as the i-th level of the Laplacian pyramid constructed for each image, and W_k^i is the i-th level of a Gaussian pyramid constructed from the weight map W_k of the fog-free regions of each image. W_k is acquired for every underexposed image through the multiplication and consolidation of the contrast and saturation maps, since loss of contrast and saturation is one of the primary visual effects of fog [18, 19]. In the AMEF method, the contrast H_k(x) at each pixel x of a given source image E_k(x) = {E_k^R, E_k^G, E_k^B} is measured as the absolute value of the response to a simple Laplacian filter.
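As a concrete sketch of the exposure-set construction, the gamma correction of Eq. (4) can be written in a few lines of NumPy. This is a minimal illustration, not the authors' code; the gamma values are placeholders, since any set of γ > 1 yields progressively darker copies of an image whose intensities lie in [0, 1].

```python
import numpy as np

def artificial_exposures(img, gammas=(1.5, 2.5, 3.5, 4.5), alpha=1.0):
    """Generate artificially under-exposed versions of a hazy image
    via gamma correction, J = alpha * I**gamma (Eq. 4).
    img: float array with intensities in [0, 1]; gamma > 1 darkens."""
    return [np.clip(alpha * img ** g, 0.0, 1.0) for g in gammas]
```

Each darkened copy reveals good-contrast regions at a different haze depth; the pyramid fusion of Eq. (5) then keeps the best-contrast content from each copy.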
The saturation S_k(x) at each pixel is estimated from the standard deviation of the color channels:

H_k(x) = | ∂²E_k/∂x²(x) + ∂²E_k/∂y²(x) |    (6)

S_k(x) = Σ_{c∈{R,G,B}} ( E_k^c(x) − (E_k^R(x) + E_k^G(x) + E_k^B(x)) / 3 )²    (7)

For a detailed explanation of the dark channel prior, readers can refer to [12]. The image dehazing process of AMEF is shown in Figure 1.

2.3 Performance comparisons of haze removal methods

Table 1 provides a comparison between state-of-the-art haze removal methods and the proposed method.

Table 1: Analogy of the state-of-the-art methods and proposed method.

Galdran [12]:
  Information: Proposes a method for image dehazing using artificial multi-exposure fusion.
  Findings: High efficiency and satisfactory result images.
  Cons: There is color distortion in the results.

Nan et al. [20]:
  Information: Uses deep learning methods to process low-light and dusty coal mine images.
  Findings: Good visual effect; efficient for all sorts of images.
  Cons: 1. Requires a large number of on-site images. 2. Low operating efficiency. 3. Complex algorithm structure.

Zhang et al. [21]:
  Information: Provides a method based on the atmospheric scattering model and DCP to enhance mine monitoring images.
  Findings: Generates satisfactory and neat photos at average speed.
  Cons: 1. The result is too dark. 2. Contains artifacts.

Wang et al. [22]:
  Information: Transform-based haze removal for low-illumination mine images.
  Findings: Effective noise reduction and edge protection.
  Cons: 1. Inapplicable for massive turbidity gradients. 2. Has oversaturation/undersaturation problems.

Proposed method:
  Information: Adopts a multi-scale fusion strategy to remove haze from coal mine images, combining image enhancement and filtering techniques to enhance contrast and image details.
  Applications and advantages: The proposed method is suitable for coal mine images with low visibility and dusty influence, and is more capable of balancing haze removal and visualization requirements than other methods.
It has higher efficiency and simple operation steps.

3 Proposed methods

3.1 Image preprocessing based on an improved homomorphic filtering algorithm

In coal mines the primary source of illumination is artificial, and its uneven distribution can result in poor overall image visibility. To address this issue, the homomorphic filtering [23] algorithm can be applied to underground images to reduce the influence of illumination on the image.

To convert the nonlinear signal into a linear model, a common technique is to apply a logarithmic transformation (Ln) to the input image f(x, y) and then use the Fourier transform (FFT) to move the image from the spatial domain to the frequency domain. Next, the transfer function H(u, v) of the filter is used to attenuate the low-frequency components and enhance the high-frequency components. After filtering, the result is converted back to the spatial domain with the inverse Fourier transform (FFT⁻¹). Finally, the enhanced image g(x, y) is obtained through an exponential transformation (exp). The flowchart of this process is illustrated in Figure 2.

Figure 2: The flow chart of homomorphic filtering.

A foggy image f(x, y) can be represented mathematically as the product of its illuminance component L(x, y) and reflectance component R(x, y) [24]. This relationship is expressed as Equation (8):

f(x, y) = L(x, y) · R(x, y)    (8)

The fundamental principle of homomorphic filtering is to treat the gray value of an image as this product and, by processing the respective impacts of the illuminance L(x, y) and the reflectance R(x, y) on the gray values, to expose the details of shadow areas.
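The processing chain of Figure 2 (Ln, FFT, transfer function, inverse FFT, exp) can be sketched as follows, together with the optimized transfer function defined below in Eq. (10). This is a hedged reconstruction rather than the authors' implementation: the exponent n of Eq. (10) is assumed to be 1 here, and a small offset guards against log(0).

```python
import numpy as np

def transfer_function(shape, d0=4.0, r_l=0.3, r_h=1.5, c=3.0, n=1):
    """Optimized exponential transfer function of Eq. (10):
    T(u, v) = (rH - rL) * exp(-c * (D0 / D(u, v))**(2n)) + rL,
    where D(u, v) is the distance from the spectrum centre.
    Parameter values follow Section 3.1; n = 1 is an assumption."""
    rows, cols = shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    D = np.maximum(D, 1e-6)  # avoid division by zero at the centre
    return (r_h - r_l) * np.exp(-c * (d0 / D) ** (2 * n)) + r_l

def homomorphic_filter(img, transfer):
    """Ln -> FFT -> multiply by T(u, v) -> inverse FFT -> exp (Figure 2)."""
    log_img = np.log(img + 1e-6)                      # Ln
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))  # FFT, centred spectrum
    filtered = np.fft.ifft2(np.fft.ifftshift(transfer * spectrum))
    return np.exp(np.real(filtered)) - 1e-6           # exp
```

With the all-pass filter T ≡ 1 the chain returns the input unchanged; with the parameters above it compresses low-frequency illumination (gain near r_L = 0.3) while retaining high-frequency reflectance detail (gain approaching r_H = 1.5).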
To address the impact of fog and dust in underground coal mine images, this paper employs an exponential function as the transfer function of the homomorphic filter, which can effectively filter out the influence of fog and dust on the image. The conventional exponential transfer function is given by Equation (9):

T(u, v) = e^{−(D_0 / D(u, v))^n}    (9)

The optimized exponential transfer function is:

T(u, v) = (r_H − r_L) e^{−c (D_0 / D(u, v))^{2n}} + r_L    (10)

where D(u, v) denotes the distance between the frequency (u, v) and the center frequency (u_0, v_0), D_0 is the cutoff frequency, r_H is the high-frequency gain, r_L is the low-frequency gain, and c is the sharpening coefficient. When 0 < r_L < 1 < r_H, the filter attenuates the low frequencies and enhances the high frequencies, thereby achieving gray-level dynamic-range compression and contrast enhancement simultaneously. The effect of different parameter values is depicted in Figure 3. As Figure 3 shows, the value of r_L is a crucial factor in the low-frequency domain, directly affecting image contrast and brightness enhancement. To suit the specific environmental characteristics of underground coal mine images, this paper sets the filter parameters as r_L = 0.3, r_H = 1.5, D_0 = 4, and c = 3.

Figure 3: Comparison of GIF and GDGIF.

3.2 Image enhancement based on a new S-type curve enhancement function

A luminance adjustment is necessary to equalize the brightness components of an underground coal mine image, as it contains both bright and dark areas. The adjustment enhances the pixel values of low-brightness areas and suppresses the pixel values of high-brightness areas. Therefore, this paper combines an S-type curve function [2] with the CLAHE algorithm to adjust the luminance component.
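As a preview of the revised S-type adjustment defined below in Eqs. (16)-(18), a minimal NumPy sketch follows. The exact form of the fog-suppression term ε is a reconstruction, read here as ε = −0.1·|F − 1/(1 + a²)|, where 1/(1 + a²) is the fixed point of the curve; a = 0.9 and τ = 1e-12 follow the paper.

```python
import numpy as np

def s_curve(F, a=0.9, tau=1e-12):
    """Revised S-type luminance adjustment (Eqs. 16-18, reconstructed).
    F: V (luminance) component scaled to [0, 1]. Dark pixels are
    brightened, bright pixels suppressed; eps damps fog regions."""
    eps = -0.1 * np.abs(F - 1.0 / (1.0 + a ** 2))               # Eq. (18), assumed form
    P = 1.0 / (1.0 + a * np.sqrt((1.0 - F) / (F + tau))) + eps  # Eq. (16)
    return np.maximum(P, 0.0)                                   # Eq. (17)
```

Note that at F = 1/(1 + a²) ≈ 0.55 (for a = 0.9) the curve leaves the luminance unchanged, brightening pixels below that level and darkening those above it.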
The S-type curve function is appropriately revised to adapt to the complex low-illumination, fog-and-dust environment of underground coal mine images. The revised S-type curve function is expressed as follows:

P(x, y) = 1 / (1 + a·√((1 − F(x, y)) / (F(x, y) + τ))) + ε    (16)

L(x, y) = max(P(x, y), 0)    (17)

where:

ε = −0.1 · | F(x, y) − 1 / (1 + a²) |    (18)

Here, F(x, y) is the input luminance component, a is the illuminance adjustment coefficient, τ is a constant used to prevent the denominator from being zero, usually set to 1E-12, and ε is the fog suppression coefficient. To obtain a clearer overall image, the value of a is set to 0.9. However, due to the interference of fog and dust in the mine environment, the original function may not achieve the desired effect; therefore, this paper adds the suppression coefficient ε.

Figure 4: S-type curve function.

Figure 4 shows the result images for different values of a, and the difference between the graphs of the improved S-type function and the original S-type function.

3.3 Fine contrast based on gradient-domain guided filtering

To address the halo artifacts at edges of underground coal mine images caused by the conventional guided filter, and taking into account the specific features of the coal mine environment, the gradient-domain guided filter (GDGIF) [25] is used instead of the Laplace filter. GDGIF provides better edge-preserving performance when processing the underground exposure images, which allows obtaining a fine contrast function. In gradient-domain guided filtering, a filtering window ω_k centered on pixel k is assumed.
A local linear model is established between the guide image I_i and the filter output q_i:

q_i = a_k I_i + b_k,  ∀ i ∈ ω_k    (19)

To determine the linear coefficients a_k and b_k that minimize the difference between the filter output q_i and the filter input p_i within a filtering window, gradient-domain guided filtering defines the following energy minimization:

E(a_k, b_k) = Σ_{i∈ω_k} ( (a_k I_i + b_k − p_i)² + (ε / Γ_G(k)) (a_k − γ_k)² )    (20)

where:

Γ_G(k) = (1/N) Σ_{i=1}^{N} (χ_k + λ) / (χ_i + λ)    (21)

γ_k = 1 − 1 / (1 + e^{η(χ_k − μ_{χ,∞})})    (22)

Here ε is the regularization parameter; Γ_G(k) is the edge-sensing weight coefficient, defined by the local variances in the filtering window; λ is a constant of size (0.001 × L)², where L is the dynamic range of the input image; and χ_k is defined as σ_{G,1}(k)·σ_{G,ξ1}(k), where ξ_1 is the window size of the filter. γ_k is the edge-sensing constraint term, and μ_{χ,∞} is the mean value of χ_i. The value of η is calculated as 4 / (μ_{χ,∞} − min(χ_i)). When pixel k is at an edge, γ_k is close to 1; when pixel k is in a flat region, γ_k is close to 0. The coefficients can then be determined by linear regression:

a_k = ( (1/|ω|) Σ_{i∈ω_k} I_i p_i − μ_k p̄_k + (ε / Γ_G(k)) γ_k ) / ( σ_k² + ε / Γ_G(k) )    (23)

b_k = p̄_k − a_k μ_k    (24)

where μ_k and σ_k² are the mean and variance of I in ω_k, |ω| is the number of pixels in ω_k, and p̄_k = Σ_{i∈ω_k} p_i / |ω| is the average of p in ω_k.

Figure 5: Comparison of GIF and GDGIF.

Figure 5 shows the different processing effects of the gradient-domain guided filter and the guided filter when the input image and the guide image are the same and the filtering parameters are r = 16, ε = 0.04.
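The window-wise linear coefficients of Eqs. (23)-(24) can be sketched as follows. This is a didactic simplification, not the authors' implementation: it evaluates only "valid" windows and takes the flat-region defaults γ_k = 0 and Γ_G(k) = 1, whereas the full GDGIF derives these from Eqs. (21)-(22) and averages the coefficients of overlapping windows.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def gdgif_coefficients(I, p, r=2, eps=0.04, gamma=None, Gamma=None):
    """Linear coefficients a_k, b_k of Eqs. (23)-(24) on valid
    (2r+1)x(2r+1) windows. I: guide image, p: filter input."""
    w = 2 * r + 1
    Iw = sliding_window_view(I, (w, w))
    pw = sliding_window_view(p, (w, w))
    mu = Iw.mean(axis=(-1, -2))                    # window mean of I
    pbar = pw.mean(axis=(-1, -2))                  # window mean of p
    var = Iw.var(axis=(-1, -2))                    # window variance of I
    cov = (Iw * pw).mean(axis=(-1, -2)) - mu * pbar
    gamma = np.zeros_like(mu) if gamma is None else gamma  # gamma_k of Eq. (22)
    Gamma = np.ones_like(mu) if Gamma is None else Gamma   # Gamma_G(k) of Eq. (21)
    a = (cov + (eps / Gamma) * gamma) / (var + eps / Gamma)  # Eq. (23)
    b = pbar - a * mu                                        # Eq. (24)
    return a, b
```

Near an edge, γ_k → 1 pushes a_k toward 1 so the edge is preserved; in flat regions, γ_k → 0 drives a_k toward 0 so the window is smoothed, which is what suppresses the halo artifacts of the conventional guided filter.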
In this paper, the contrast function H(x) is built from the initial contrast function ϕ(x) of the gamma-corrected exposure image I_k^γ(x). For gradient-domain guided filtering:

H(x) = ϕ_k(x) ⊕ G(K, r)    (25)

where ⊕ G(K, r) denotes applying gradient-domain guided filtering to E_k^i(x), K is the guide image, and r is the filter size. In this paper, the gray image of the original exposure image is used as K, and the filter size is:

r = ⌈ (1/6) max(h, w) − 1 ⌉    (26)

where ⌈ ⌉ denotes the rounding operation, and h and w are the height and width of the corresponding image. The larger the filter size, the more detailed information is contained in the contrast component; moreover, the computational complexity and running time of the filter are independent of the filter size. The filtered image Φ(x) is:

Φ(x) = I_Ω^max(x) − I_Ω^min(x)    (27)

where Ω is the given region, I_Ω^max(x) = max{I(x) | x ∈ Ω} and I_Ω^min(x) = min{I(x) | x ∈ Ω}.

3.4 The specific implementation process of the proposed algorithm

The algorithm flow chart is shown in Figure 6.

Figure 6: The flow chart of the proposed algorithm.

Firstly, this paper introduces the contrast-enhanced image into the artificial exposure image set. Unlike the original method, which applies the CLAHE algorithm directly to the original image, this paper converts the input image into the HSV color space, uses a combination of CLAHE and a new S-type curve function to correct the value component, and then converts the corrected image back to the RGB color space. Next, a group of available exposure images is obtained from the multiple-exposure image set, and the corresponding weights are calculated before artificial fusion and defogging. In this paper, the gradient-domain guided filter is used to obtain the exposure-image contrast, which is combined with the saturation obtained from the standard deviation of each color channel to obtain the weight.
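The weight construction just described can be sketched with the filter-size rule of Eq. (26), the local max-min contrast of Eq. (27), and the saturation of Eq. (7). This is a simplified sketch under stated assumptions: Eq. (26) is read as a ceiling operation, Eq. (27) is evaluated on valid windows only, and the GDGIF refinement step is omitted.

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def filter_radius(h, w):
    """Eq. (26), read as r = ceil(max(h, w) / 6 - 1)."""
    return int(np.ceil(max(h, w) / 6.0 - 1.0))

def local_contrast(gray, r):
    """Eq. (27): local max minus local min over a (2r+1)^2 region
    (valid windows only, for brevity)."""
    win = sliding_window_view(gray, (2 * r + 1, 2 * r + 1))
    return win.max(axis=(-1, -2)) - win.min(axis=(-1, -2))

def saturation(rgb):
    """Eq. (7): sum of squared channel deviations from the channel mean."""
    mean = rgb.mean(axis=2, keepdims=True)
    return ((rgb - mean) ** 2).sum(axis=2)
```

The per-image weight map W_k is then the pixel-wise product of the contrast and saturation maps, normalized across the exposure set before the pyramid fusion of Eq. (5).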
Finally, an improved homomorphic filter is introduced into the Laplacian pyramid that decomposes the available images, which are fused level by level with the weight maps obtained from the Gaussian pyramid decomposition to produce the final clear coal mine image.

4 Experimental results

This section evaluates the proposed method using both subjective visual perception and objective evaluation indicators. All test images were obtained from Flickr and [26, 27]. All experiments were conducted on a laptop with a 2.30 GHz CPU and 8 GB RAM using MATLAB R2017A. To evaluate the effectiveness of the proposed method, several state-of-the-art technologies were used for comparison. Figures 7 to 9 depict the experimental results. Specifically, Figure 7 illustrates the contrast enhancement and equalization of brightness components in the HSV color space, compared with the image enhanced by the original CLAHE algorithm. Figures 8 and 9 display the processing results of the proposed method and other methods, namely Shin et al. [28], Li et al. [29], Zhu et al. [30], Galdran [12], and Ehsan et al. [11]. To ensure a fair comparison, the results of the state-of-the-art methods were obtained from the public code of their respective authors.

4.1 Subjective evaluation

In Figure 7, the rectangular boxes highlight the hydraulic support and coal-layer areas that exhibit color distortion and contrast over-enhancement. It is observed that direct application of CLAHE to the original image may cause such problems in certain regions of the image. In contrast, the proposed method converts the original image to the HSV color space for adjustment, which effectively prevents these problems. Figure 8 displays the results of various algorithms applied to coal mine images. The results of Shin et al.
[28] and Ehsan et al. [11] show that the overall brightness of the images is lower than that of the other algorithms, and image details are poorly preserved. Galdran [12] and Zhu et al. [30] exhibit different issues, such as color distortion and excessive contrast enhancement. For instance, in the yellow-box areas of images a and b, Galdran [12] suffers from color distortion at the hydraulic support and in the grayscale-mutation area, while Zhu et al. [30] causes serious color and contrast imbalance over the whole of image c. In contrast, Li et al. [29] and the proposed method demonstrate better visual effects, such as better brightness levels and details that are clearer to human visual perception.

Figure 7: Comparison of image enhancement results between the proposed method and CLAHE.

Figure 8: Comparison of different haze removal algorithms on coal mine images a-d using state-of-the-art methods and the proposed method. The regions marked by yellow frames in images a and b are enlarged. (a) input image; (b) Galdran [12]; (c) Shin et al. [28]; (d) Li et al. [29]; (e) Ehsan et al. [11]; (f) Zhu et al. [30]; (g) proposed method.

As shown in Figure 9, this paper not only compares the processing effects of different algorithms on close-range hazy mine images but also processes some nighttime images with lighting environments similar to the coal mine. For images with large dark areas and fog in the distant view, such as images a and b, the results of Li et al. [29] and Ehsan et al.
’s [11] processing results reveal that the overall image brightness is low; although the fog is removed, the visual effect of the processed image deteriorates. For images g, h, and i, Zhu et al. [30] causes serious color distortion and excessive contrast enhancement when processing images whose overall tone tends to be uniform due to light and fog. The dehazed images of Shin et al. [28] are still hazy. Galdran [12] and the proposed method produce more satisfactory results than the other algorithms.

4.2 Objective evaluation

This paper uses six common evaluation metrics to assess haze removal quality: Peak Signal-to-Noise Ratio (PSNR) [31], Structural Similarity (SSIM) [32], Mean Light Intensity (MLI), Contrast Index (CI), Entropy (E), and Average Gradient (AG). These metrics were chosen to provide a comprehensive evaluation of the performance of the different haze removal algorithms. Figure 10 presents the objective average evaluation results of the various algorithms on these metrics. In addition, Figure 11 compares the running times and the numbers of processed pixels for the six methods used in this paper.

Figure 9: Comparison of different haze removal algorithms on nighttime images e-i using state-of-the-art methods and the proposed method. (a) input image; (b) Galdran [12]; (c) Shin et al. [28]; (d) Li et al. [29]; (e) Ehsan et al. [11]; (f) Zhu et al. [30]; (g) proposed method.

According to the results presented in Figure 10, Table 2 summarizes the performance of the six evaluation metrics over all experimental images. Galdran [12] and the proposed method achieved the highest SSIM scores, both exceeding 0.8, indicating that they effectively retain the content information of the original image during dehazing. Ehsan et al.
[11] scored lower than the other algorithms on the E, MLI, AG, and CI indices, suggesting that it may struggle to meet the dehazing and visualization requirements for such images. Shin et al. [28] performed well on the E and AG indices, indicating that it effectively retains the structural information of the original image. Li et al. [29] ranked relatively high on the E index, but its CI and MLI scores were lower, indicating good dehazing performance but possibly poor image quality after dehazing. Zhu et al. [30] had the lowest PSNR and SSIM scores, indicating that its processed images are severely distorted and may not meet visual requirements. The proposed method achieved the top ranking on the MLI, CI, PSNR, SSIM, and E indices, and its AG result was only slightly below the top performer, demonstrating that the proposed method achieves good human visualization performance. As shown in Figure 11, the processing times of the Shin et al. [28] and Ehsan et al. [11] algorithms are significantly higher than those of the other algorithms; in particular, there is an order-of-magnitude difference between the Ehsan et al. [11] algorithm and the others. The proposed method is second only to Zhu et al. [30] in processing efficiency, and its processing time changes only slightly with increasing image size, indicating its suitability for real-time image processing.

Figure 10: Quantitative analysis results (average) of the test images.

Table 2: Average six metrics values of test hazy images of different algorithms (the maximum value is marked in bold).
Method             E     MLI   CI    AG     PSNR   SSIM
Shin et al. [28]   7.32  0.35  0.36  0.087  20.21  0.75
Li et al. [29]     6.92  0.26  0.23  0.032  20.12  0.71
Zhu et al. [30]    7.12  0.36  0.33  0.065  17.43  0.52
Galdran [12]       7.36  0.39  0.37  0.073  22.62  0.83
Ehsan et al. [11]  6.25  0.22  0.18  0.034  17.84  0.65
Ours               7.49  0.44  0.41  0.083  24.56  0.87

Figure 11: Comparison of the processing times of different algorithms on haze images with differing numbers of pixels.

4.3 Discussion

This article proposed a haze removal method for coal mine images that processes the image with a multi-exposure fusion method. Unlike previous dehazing methods that rely on physical models and may estimate the transmittance and atmospheric light incorrectly, the proposed method avoids the over-darkness and artifacts that can arise from white objects or light sources in the image. The proposed method also employs a combination of gradient-domain guided filtering and improved homomorphic filtering, which effectively preserves image details and protects edge information during dehazing. In terms of efficiency, pyramid decomposition is introduced to process hazy images; unlike methods that perform complex iterative optimization of model parameters, this ensures image integrity during processing. At the same time, most current haze removal methods for coal mine images struggle to balance the requirements of haze removal and image visualization in the challenging low-light, dusty environments commonly encountered in coal mines. The proposed method addresses this issue through an S-type curve function and CLAHE in the HSV color space, enhancing image visualization while simultaneously removing haze.

5 Conclusion

In this paper, AMEF was first introduced, which does not directly rely on depth-related physical models but rather restores hazy images by fusing the good-contrast regions of multiple exposure images.
However, when an input image is degraded by multiple physical processes, such as nighttime haze or underwater attenuation, AMEF and most advanced algorithms cannot solve the resulting color temperature or low-illuminance problems. An improved AMEF-based method for processing fog- and dust-degraded images in complex underground environments was therefore proposed
in this paper. Firstly, a new adjustment function was introduced in the HSV color space to enhance the exposed images and correct color distortion. Secondly, new weight maps were built with a gradient-domain guided filter to retain image details as far as possible. Lastly, for contrast enhancement, the improved homomorphic filtering algorithm was introduced into the Laplacian pyramid. The experimental results show the proposed method to be superior to other advanced methods in terms of efficient and effective image dehazing. Some issues remain worth investigating in future work, including a more in-depth analysis of the model parameters and the application of the proposed method to real-time video processing.

Acknowledgements

This research was supported by the Independent Research Fund of the State Key Laboratory of Mining Response and Disaster Prevention and Control in Deep Coal Mines (No. SKLMRDPC20ZZ06), the Youth Elite Support Plan program in Universities of Anhui Province (No. gxyq2020013), and the Innovation Fund of Anhui University of Science and Technology (2021CX2054).

References

[1] Shi, Z. H.; Feng, Y. N.; Zhao, M. H.; Zhang, E. H.; He, L. F. Normalised gamma transformation-based contrast-limited adaptive histogram equalisation with colour correction for sand-dust image enhancement. IET Image Processing 2020, 14(4): 747-756. https://doi.org/10.1049/iet-ipr.2019.0992
[2] Zhi, N.; Mao, S.; Li, M. Enhancement algorithm based on illumination adjustment for non-uniform illuminance video images in coal mine. Journal of China Coal Society 2017, 42(08): 2190-2197. https://doi.org/10.13225/j.cnki.jccs.2017.0048
[3] Zhi, N.; Mao, S.; Li, M. An enhancement algorithm for coal mine low illumination images based on bi-gamma function. Journal of Liaoning Technical University (Natural Science) 2018, 37(01): 191-197.
https://doi.org/10.11956/j.issn.1008-0562.2018.01.034
[4] He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. In 2009 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Miami, FL, 20-25 June 2009; pp. 1956-1963. https://doi.org/10.1109/CVPR.2009.5206515
[5] Qin, H.; Li, Y.; Long, W.; Zhao, R. Real-time video dehazing using guided filtering and transmissivity estimated based on dark channel prior theory. Journal of Zhejiang University (Engineering Science) 2018, 52(07): 1302-1309. https://doi.org/10.3785/j.issn.1008-973X.2018.07.010
[6] He, K. M.; Sun, J.; Tang, X. O. Guided Image Filtering. In 11th European Conference on Computer Vision (ECCV), Heraklion, Greece, 5-11 September 2010; pp. 1-14. https://doi.org/10.1007/978-3-642-15549-9_1
[7] Meng, G.; Wang, Y.; Duan, J.; Xiang, S.; Pan, C. Efficient Image Dehazing with Boundary Constraint and Contextual Regularization. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Sydney, Australia, 1-8 December 2013; pp. 617-624. https://doi.org/10.1109/iccv.2013.82
[8] Zhu, Q.; Mai, J.; Shao, L. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior. IEEE Transactions on Image Processing 2015, 24(11): 3522-3533. https://doi.org/10.1109/TIP.2015.2446191
[9] Berman, D.; Treibitz, T.; Avidan, S. Non-local Image Dehazing. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016; pp. 1674-1682. https://doi.org/10.1109/cvpr.2016.185
[10] Berman, D.; Treibitz, T.; Avidan, S. Air-Light Estimation Using Haze-Lines. In Proceedings of the IEEE International Conference on Computational Photography (ICCP), Stanford, CA, 12-14 May 2017; pp. 115-123. https://doi.org/10.1109/ICCPHOT.2017.7951489
[11] Ehsan, S. M.; Imran, M.; Ullah, A.; Elbasi, E. A Single Image Dehazing Technique Using the Dual Transmission Maps Strategy and Gradient-Domain Guided Image Filtering.
IEEE Access 2021, 9: 89055-89063. https://doi.org/10.1109/access.2021.3090078
[12] Galdran, A. Image dehazing by artificial multiple-exposure image fusion. Signal Processing 2018, 149: 135-147. https://doi.org/10.1016/j.sigpro.2018.03.008
[13] Yang, M.; Liu, J.; Li, Z. Superpixel-Based Single Nighttime Image Haze Removal. IEEE Transactions on Multimedia 2018, 20(11): 3008-3018. https://doi.org/10.1109/tmm.2018.2820327
[14] Li, S. Y.; Lin, J.; Yang, X.; Ma, J.; Chen, Y. F. BPFD-Net: enhanced dehazing model based on Pix2pix framework for single image. Machine Vision and Applications 2021, 32(6): 1-13. https://doi.org/10.1007/s00138-021-01248-9
[15] Ullah, H.; Muhammad, K.; Irfan, M.; Anwar, S.; Sajjad, M.; Imran, A. S.; de Albuquerque, V. H. C. Light-DehazeNet: A Novel Lightweight CNN Architecture for Single Image Dehazing. IEEE Transactions on Image Processing 2021, 30: 8968-8982. https://doi.org/10.1109/TIP.2021.3116790
[16] Huang, S. C.; Cheng, F. C.; Chiu, Y. S. Efficient contrast enhancement using adaptive gamma correction with weighting distribution. IEEE Transactions on Image Processing 2013, 22(3): 1032-1041. https://doi.org/10.1109/TIP.2012.2226047
[17] Zuiderveld, K. Contrast Limited Adaptive Histogram Equalization. In Graphics Gems; Heckbert, P. S., Ed.; Academic Press, 1994; pp. 474-485. https://doi.org/10.1016/B978-0-12-336156-1.50061-6
[18] Xu, G.; Aminu, M. J. An Efficient Procedure for Removing Salt and Pepper Noise in Images. Informatica 2022, 46(2). https://doi.org/10.31449/inf.v46i2.3530
[19] Shi, Z.; Feng, Y.; Zhao, M.; Zhang, E.; He, L. Let You See in Sand Dust Weather: A Method Based on Halo-Reduced Dark Channel Prior Dehazing for Sand-Dust Image Enhancement. IEEE Access 2019, 7: 116722-116733.
https://doi.org/10.1109/access.2019.2936444
[20] Nan, Z.; Gong, Y. An Image Enhancement Method in Coal Mine Underground Based on Deep Retinex Network and Fusion Strategy. In 2021 6th International Conference on Image, Vision and Computing (ICIVC), 2021; pp. 209-214. https://doi.org/10.1109/icivc52351.2021.9526933
[21] Zhang, W.; Zuo, D.; Wang, C.; Sun, B. Research on image enhancement algorithm for the monitoring system in coal mine hoist. Measurement and Control 2023. https://doi.org/10.1177/00202940231173767
[22] Wang, M.; Tian, Z. Mine image enhancement algorithm based on nonsubsampled contourlet transform. Journal of China Coal Society 2020, 45(9): 3351-3362. https://doi.org/10.13225/j.cnki.jccs.2019.0798
[23] Fan, Y.; Zhang, L.; Guo, H.; Hao, H.; Qian, K. Image Processing for Laser Imaging Using Adaptive Homomorphic Filtering and Total Variation. Photonics 2020, 7(2): 30-45. https://doi.org/10.3390/photonics7020030
[24] Jobson, D. J.; Rahman, Z.; Woodell, G. A. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Transactions on Image Processing 1997, 6(7): 965-976. https://doi.org/10.1109/83.597272
[25] Kou, F.; Chen, W.; Wen, C.; Li, Z. Gradient Domain Guided Image Filtering. IEEE Transactions on Image Processing 2015, 24(11): 4528-4539. https://doi.org/10.1109/TIP.2015.2468183
[26] Mao, Q.; Wang, Y.; Zhang, X.; Zhao, X.; Zhang, G.; Mushayi, K. Clarity method of fog and dust image in fully mechanized mining face. Machine Vision and Applications 2022, 33(2): 30-45. https://doi.org/10.1007/s00138-022-01282-1
[27] Yu, T.; Song, K.; Miao, P.; Yang, G.; Yang, H.; Chen, C. Nighttime Single Image Dehazing via Pixel-Wise Alpha Blending. IEEE Access 2019, 7: 114619-114630. https://doi.org/10.1109/access.2019.2936049
[28] Shin, J.; Kim, M.; Paik, J.; Lee, S. Radiance-Reflectance Combined Optimization and Structure-Guided ℓ0-Norm for Single Image Dehazing. IEEE Transactions on Multimedia 2020, 22(1): 30-44.
https://doi.org/10.1109/tmm.2019.2922127
[29] Li, Z.; Shu, H.; Zheng, C. Multi-Scale Single Image Dehazing Using Laplacian and Gaussian Pyramids. IEEE Transactions on Image Processing 2021, 30: 9270-9279. https://doi.org/10.1109/TIP.2021.3123551
[30] Zhu, Z.; Wei, H.; Hu, G.; Li, Y.; Qi, G.; Mazur, N. A Novel Fast Single Image Dehazing Algorithm Based on Artificial Multiexposure Image Fusion. IEEE Transactions on Instrumentation and Measurement 2021, 70: 1-23. https://doi.org/10.1109/tim.2020.3024335
[31] Huang, Y.; Niu, B.; Guan, H.; Zhang, S. Enhancing Image Watermarking With Adaptive Embedding Parameter and PSNR Guarantee. IEEE Transactions on Multimedia 2019, 21(10): 2447-2460. https://doi.org/10.1109/tmm.2019.2907475
[32] Tang, Y. M.; Ren, F. J.; Pedrycz, W. Fuzzy C-Means clustering through SSIM and patch for image segmentation. Applied Soft Computing 2020, 87: 105928-105943. https://doi.org/10.1016/j.asoc.2019.105928