International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 07 Issue: 02 | Feb 2020 www.irjet.net p-ISSN: 2395-0072
Dehazing Of Single Nighttime Haze Image using Superpixel Method
A. Susmi Raj1, R. Soundararajan2
1PG Scholar, Dept. of Computer Science and Engineering, SVS College of Engineering, Coimbatore, Tamilnadu, India
2Assistant Professor, Dept. of Computer Science and Engineering, SVS College of Engineering
Coimbatore, Tamilnadu, India
---------------------------------------------------------------------***----------------------------------------------------------------------
Abstract - Outdoor images captured in harsh weather are degraded by the presence of fog, mist, rain and so on. Scenes captured in bad weather have poor contrast and color, which also makes it difficult to detect objects in the captured hazy images. Haze is troublesome for many computer vision applications because it reduces the visibility of the scene. Image de-hazing is therefore one of the essential and important research topics in image processing. Haze is essentially an atmospheric effect.
In this paper a novel super-pixel based single image haze removal algorithm combined with a neural network is proposed for nighttime haze images. The input nighttime haze image is first decomposed into a glow image and a glow-free nighttime haze image using their relative smoothness. A super-pixel based approach is then introduced to compute the atmospheric light and the dark channel for each pixel in the glow-free haze image. The transmission map is derived from the dark channel of the glow-free haze image by means of the weighted guided image filter. Since super-pixels usually adhere well to the boundaries of objects, a smaller local window size can be selected. In addition, the hazy images are fed to a neural network that improves the quality of the output image, that is, the haze-free image we want to obtain.
Key words: Nighttime image haze removal, Glow
decomposition, Morphological artifacts, Super-pixel
segmentation, Weighted guided image filtering
1. INTRODUCTION
In real life, natural phenomena (such as rain, haze, snow, etc.) lead to the degradation of outdoor images. Because a massive amount of particles floats in the atmosphere in these environments, light propagation through the atmosphere is affected by these floating particles. In the field of computer vision, image de-hazing is in broad demand. From the perspective of image processing, it can provide pre-processing for other visual algorithms. Beyond this practical point of view, it also plays an important role in military and civil systems [1]. Haze-free images make the scene look more realistic and provide more useful information. Therefore the study of image de-hazing has significant practical importance and development prospects.
Many haze removal techniques are based on the optical model developed for daytime haze images. At present, the most widely used physical model is a linear equation involving the transmission and the atmospheric light. According to this model, daytime de-hazing techniques need to estimate the atmospheric light and the transmission map. Many classic approaches use additional information or multiple images to remove haze from the hazy image. For example, Schechner et al. [2] proposed a novel technique that uses images taken with different polarizer orientations for de-hazing. Lai et al. [3] proposed an interesting technique to derive the optimal transmission map directly from the daytime haze image model. In [4], a fast algorithm for single image de-hazing based on a linear transformation is proposed by assuming that a linear relationship exists in the minimal channel between the hazy image and the haze-free image. On top of the dark channel prior, He et al. introduced a simple method for single image haze removal in [5]. The dark channel prior is based on the observation that most local patches in haze-free outdoor images contain some pixels that have very low intensities in at least one color channel. Many dark channel prior based haze removal algorithms have been introduced since then [7][8]. The dark channel prior based methods usually work well, but the prior has limitations. For example, morphological artifacts are an issue for the dark channel prior when the preliminary transmission map is computed using the dark channel prior [5]. A simple edge-preserving decomposition based framework was proposed for single image haze removal in [9]. The simplified dark channel of the haze image is decomposed into a base layer and a detail layer by the weighted guided image filter (WGIF) in [11], and the transmission map is estimated from the base layer. Image enhancement based techniques have also been applied to image de-hazing, for example [12][13][14].
Even though these techniques generally perform well for daytime haze images, they are not well equipped to correct nighttime scenes due to varying imaging conditions such as active light sources and glow effects. Because of the active light sources, for example street lamps, and their associated glow, the model of daytime haze images is no longer applicable to nighttime haze images. Recently, several interesting methods have emerged to enhance nighttime haze images [15][16][17][20]. Pei and Lee [15] introduced a method based on color transfer processing. The colors of a nighttime haze image are mapped to those of a daytime haze image. Then, a dark channel prior based algorithm is applied to estimate the transmission map. A post-processing step was also provided to improve the insufficient brightness
and low overall contrast of the haze-free image. Due to the color transfer, even though their approach has reliable de-hazing quality, the color of the resulting haze-free image looks unnatural. Zhang et al. [16] proposed a new image model which accounts for varying illumination. First, the light intensity is estimated and enhanced to obtain an illumination-balanced result; then, the color characteristics of the incident light are estimated; finally, the dark channel prior is used to remove the haze.
The main contribution of this paper is a new type of haze removal algorithm using the concept of super-pixels. Compared with the patch based methods, super-pixels can be used to reduce the morphological artifacts caused by the patches. This is because super-pixels adhere well to the boundaries of objects in the haze image. As such, the radius of the WGIF can be reduced and, consequently, more fine details can be preserved. In addition, the proposed super-pixel based technique has a better chance of correctly estimating the transmission maps for all pixels in white objects near the camera than the algorithms in [9][17] and [25]. In other words, the proposed method provides a solution to the challenging problem of estimating the transmission maps for pixels in white objects close to the camera. The rest of this paper is organized as preliminary knowledge, the procedure of nighttime single image haze removal, experimental results, and conclusion.
2. PRELIMINARY KNOWLEDGE
Since the WGIF and the SLIC are employed in the proposed method, the relevant background on them is summarized in this section.
2.1. Weighted Guided Image Filter
Denote the guidance image as I, the filtering output as Z, and the input image as X. The images I and X can be identical. Z is assumed to be a linear transform of I in a window Ωk centered at the pixel p, where the radius of the window Ωk is r:

Z(p) = ak I(p) + bk, ∀p ∈ Ωk, (1)

where the pair (ak, bk) is computed by minimizing the following cost function over the window Ωk:

E(ak, bk) = ∑_{p∈Ωk} ((ak I(p) + bk − X(p))² + λ ak²), (2)
and λ is a regularization parameter penalizing large ak. Since halo artifacts may appear around some edges, an edge-aware weighting is introduced and incorporated into the GIF to form the WGIF. With the edge-aware weighting ΓI(k), the cost function becomes:

E(ak, bk) = ∑_{p∈Ωk} ((ak I(p) + bk − X(p))² + λ ΓI(k) ak²), (3)
The solutions for ak and bk are given by:

ak = (µI⊙X,r(k) − µI,r(k) µX,r(k)) / (σ²I,r(k) + λ ΓI(k)), (4)

bk = µX,r(k) − ak µI,r(k), (5)

where µ·,r(k) denotes the mean of the corresponding image over the window Ωk of radius r, and σ²I,r(k) is the variance of I over Ωk.
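For concreteness, a minimal NumPy sketch of the filter defined by Eqs. (1)-(5) is given below. The function name, the use of scipy's uniform_filter as the box mean, and the default parameter values are illustrative rather than taken from the paper, and the edge-aware weighting ΓI(k) is passed in as an optional per-pixel map (Γ = 1 recovers the plain GIF).

```python
import numpy as np
from scipy.ndimage import uniform_filter  # box (mean) filter over a (2r+1) window

def guided_filter(I, X, r=8, lam=1e-3, gamma=None):
    """Minimal guided-filter sketch following Eqs. (1)-(5).

    I, X are single-channel float images in [0, 1]. `gamma` is an optional
    edge-aware weighting map Gamma_I(k); gamma=None reduces to the plain GIF.
    """
    if gamma is None:
        gamma = np.ones_like(I)
    size = 2 * r + 1
    mean_I = uniform_filter(I, size)
    mean_X = uniform_filter(X, size)
    mean_IX = uniform_filter(I * X, size)
    var_I = uniform_filter(I * I, size) - mean_I ** 2
    # Eqs. (4) and (5): per-window linear coefficients
    a = (mean_IX - mean_I * mean_X) / (var_I + lam * gamma)
    b = mean_X - a * mean_I
    # Average the coefficients of all windows covering each pixel, then apply Eq. (1)
    mean_a = uniform_filter(a, size)
    mean_b = uniform_filter(b, size)
    return mean_a * I + mean_b
```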
2.2 Super-pixel Segmentation Via The SLIC
A super-pixel is a small region composed of a sequence of pixels with adjacent positions, similar color, similar brightness, similar texture and other characteristics. SLIC (Simple Linear Iterative Clustering) is a common approach to super-pixel segmentation. It can group pixels quickly and simply and, moreover, it identifies boundaries well. The color image is converted into five-dimensional feature vectors V = [l,a,b,x,y] in the CIELAB color space and XY coordinates, where [l,a,b] describes the color of a pixel and [x,y] gives its position. SLIC defines a distance measure on these five-dimensional feature vectors and performs a local clustering of the image pixels. The SLIC algorithm generates compact and roughly uniform super-pixels. Besides, it performs well in terms of speed, contour preservation and super-pixel shape, and its segmentation results match human expectations. The metric of SLIC is given as:
dlab = √((l(p) − l(p′))² + (a(p) − a(p′))² + (b(p) − b(p′))²) (6)

dxy = √((x(p) − x(p′))² + (y(p) − y(p′))²) (7)

Ds = √((dlab/Nlab)² + (dxy/Nxy)²) (8)
where dlab represents the color distance, dxy represents the spatial distance, and p and p′ are two pixels; Nxy = √(K/N), where K is the total number of pixels and N is the number of super-pixels; Nlab varies between images but is normally a fixed value. Since super-pixels generally pick out boundaries well, it can be expected that the concept of the super-pixel can be used to reduce the morphological artifacts in single image haze removal.
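Since the proposed method relies on the SLIC, a short usage sketch is shown below. It uses scikit-image's built-in slic implementation rather than the clustering described above; the file name, the number of super-pixels N and the compactness value are placeholder choices.

```python
import numpy as np
from skimage.io import imread
from skimage.color import rgb2lab
from skimage.segmentation import slic

# Segment a hazy image into roughly uniform super-pixels.
image = imread("night_haze.png")[..., :3] / 255.0   # placeholder file name
N = 400
labels = slic(image, n_segments=N, compactness=10, start_label=0)

# Example use: the mean CIELAB color of each super-pixel
lab = rgb2lab(image)
means = np.array([lab[labels == k].mean(axis=0) for k in range(labels.max() + 1)])
print(labels.shape, means.shape)
```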
3. NIGHTTIME SINGLE IMAGE HAZE REMOVAL
In this section, the model of nighttime haze images is described first. Based on this model, the glow is removed from the input image and the glow-free haze image is obtained. The model of the glow-free haze image is similar to the model of a daytime haze image, except that the atmospheric light is spatially varying in the nighttime haze image. It can thus be expected that existing dark channel based algorithms for daytime haze removal can be extended to handle nighttime haze images. Based on this observation, a super-pixel based haze removal algorithm is presented for the glow-free nighttime haze image.
3.1 Modelling Of A Nighttime Haze Image
Since the only light source is the sun, daytime images are not affected by other light sources. In daytime haze image de-hazing algorithms, the model is generally used in the form:
Xc(p) = Zc(p)t(p) + Ac(1−t(p)) (9)
When an image is captured at night, the light sources mainly come from street lamps, vehicle lights, etc. These lights are not globally uniform, and thus glow is present in the image. The glow is modeled by an atmospheric point spread function (APSF). Inspired by this, the complete nighttime haze scene is modeled by adding the glow term to the daytime haze image model:
Xc(p) = Zc(p)t(p)+ Ac(1−t(p))+ Aa ∗ APSF (10)
where c ∈ {r,g,b} is the color channel index and Xc denotes the observed nighttime image. Zc represents the scene radiance vector. t is the transmission map and indicates the fraction of light that penetrates the haze. Ac represents the atmospheric light, which is no longer globally uniform. Aa indicates the active light sources, whose intensity is convolved with the APSF. In order to obtain the scene radiance vector, that is, the haze-free image Z, the glow (that is, Aa ∗ APSF) is decomposed from the input image Xc(p), and the atmospheric light Ac and the transmission map t are estimated.
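To make the imaging model of Eq. (10) concrete, the sketch below composes a synthetic nighttime hazy image from a clean scene. The Gaussian blur standing in for the APSF, as well as the function name and parameter values, are illustrative assumptions, not the model's actual point spread function.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def synthesize_nighttime_haze(Z, t, A, light_map, sigma=15.0):
    """Compose a nighttime hazy image according to Eq. (10).

    Z         : (H, W, 3) haze-free scene radiance in [0, 1]
    t         : (H, W)    transmission map in (0, 1]
    A         : (H, W, 3) spatially varying atmospheric light
    light_map : (H, W, 3) intensity of the active light sources Aa
    The APSF is approximated here by an isotropic Gaussian blur, an
    illustrative stand-in for the true atmospheric point spread function.
    """
    t3 = t[..., None]
    glow = gaussian_filter(light_map, sigma=(sigma, sigma, 0))  # Aa * APSF
    X = Z * t3 + A * (1.0 - t3) + glow
    return np.clip(X, 0.0, 1.0)
```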
3.2 Glow Removal from the Input Image
From the unprocessed images, we can see that glow reduces the visibility of the image and may even make some objects invisible. In order to obtain a good visual result, the glow has to be removed from the input image. For simplicity, equation (10) can be written as:

Xc(p) = Jc(p) + Gc(p) (11)

where Jc(p) = Zc(p)t(p) + Ac(1 − t(p)) is called the glow-free nighttime haze image, and Gc(p) = Aa ∗ APSF is the glow image. From equation (11), glow removal can be regarded as a layer separation problem. The objective function for the glow layer separation can be described as follows:

E(J) = ∑_p (ρ(J(p) ∗ f1,2) + λ1((X(p) − J(p)) ∗ f3)²) (12)

s.t. 0 ≤ J(p) ≤ X(p), ∑_p Jr(p) = ∑_p Jg(p) = ∑_p Jb(p)
Here f1,2 denotes the two first-order derivative filters along the two directions, f3 is the second-order Laplacian filter, and the operator ∗ denotes convolution. ρ(µ) = min(µ², τ) is a robust function which preserves the large gradients of the input image X in the resulting nighttime haze layer [17]. For simplicity, we define Fj L = L ∗ fj; the objective function can then be rewritten as:

E(J) = ∑_p (ρ(F1,2 J(p)) + λ1(F3 J(p) − F3 X(p))²) (13)
Following the method in [23], in order to move the F1,2 J term outside the ρ(·) function, auxiliary variables g1,2 and a weight β are introduced into the objective function, which can then be written as:

E(J) = ∑_p (β(F1,2 J(p) − g1,2)² + ρ(g1,2) + λ1(F3 J(p) − F3 X(p))²) (14)
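As a rough illustration of how Eq. (14) can be minimized, the sketch below performs the closed-form J-update of a half-quadratic scheme for a single color channel with the auxiliary variables g1,2 held fixed. It assumes periodic boundary conditions so the quadratic subproblem can be solved in the Fourier domain; the derivative and Laplacian filters and the simple clipping used to impose the constraint of Eq. (12) are our own illustrative choices, not the exact solver used in [23].

```python
import numpy as np

def psf2otf(psf, shape):
    """Embed a small filter into an image-sized array and return its DFT."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for axis, s in enumerate(psf.shape):
        pad = np.roll(pad, -(s // 2), axis=axis)   # center the filter at (0, 0)
    return np.fft.fft2(pad)

def solve_J_step(X, g1, g2, beta, lam1):
    """One J-update of the half-quadratic scheme for Eq. (14), single channel."""
    f1 = np.array([[1.0, -1.0]])                    # horizontal first-order derivative
    f2 = np.array([[1.0], [-1.0]])                  # vertical first-order derivative
    f3 = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)  # Laplacian
    F1, F2, F3 = (psf2otf(f, X.shape) for f in (f1, f2, f3))
    Xf = np.fft.fft2(X)
    num = beta * (np.conj(F1) * np.fft.fft2(g1) + np.conj(F2) * np.fft.fft2(g2)) \
          + lam1 * np.abs(F3) ** 2 * Xf
    den = beta * (np.abs(F1) ** 2 + np.abs(F2) ** 2) + lam1 * np.abs(F3) ** 2 + 1e-8
    J = np.real(np.fft.ifft2(num / den))
    return np.clip(J, 0.0, X)                       # rough projection onto 0 <= J <= X
```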
Fig. 1: (a,d) two nighttime haze images; (b,e) glow images by the method in [23]; (c,f) glow images by the proposed algorithm. The glow images produced by the proposed algorithm are smoother.
It can be verified that the solution obtained in [23] is an approximate one, while the solution provided here is an exact one. Therefore, our output is more accurate than the output in [23]. Fig. 1 shows the glow images separated by the algorithm in [23] and by the proposed algorithm. Clearly, the glow details in (c,f) are more meticulous and continuous than in (b,e). Therefore, the proposed glow decomposition method is better than the technique in [23]. Since the input image consists of a glow image and a glow-free haze image, the glow decomposition procedure directly affects the composition of the glow-free image and thereby the quality of the de-hazed image.
Fig. 2: (a,f) two haze images; (b,g) glow images by the method [23]; (c,h) glow-free haze images by the method [23]; (d,i) glow images by the proposed algorithm; (e,j) glow-free haze images by the proposed algorithm. There are more details in the glow-free haze images produced by the proposed algorithm.
Fig. 2 shows the glow images and the glow-free images produced by Li's algorithm [23] and by the proposed algorithm. It can be seen from Fig. 2 (c),(e) that the glow is removed better by our algorithm, and from Fig. 2 (h),(j) that the glow-free haze images are closer to the natural scene.
3.3 Atmospheric Light Estimation
After the glow is removed from the input image, a glow-free nighttime haze image is obtained. In order to recover the haze-free image, the atmospheric light Ac and the transmission map t need to be estimated. Since the models of the daytime haze image and the glow-free nighttime haze image are almost the same except for the atmospheric light, it can be expected that some existing daytime de-hazing methods can be applied to remove the haze from the glow-free nighttime haze image. The atmospheric light usually appears as the brightest color in a daytime haze image. However, due to the presence of light sources, the atmospheric light is no longer globally uniform in the glow-free nighttime haze image. According to the image formation model, the image Zc(p) is a product of illumination and reflectance components:
Zc(p) = Ac(p)Rc(p) (15)
where Rc(p) is the reflectance component. The glow-free nighttime haze image is then represented as:
Xc(p) = Ac(p)Rc(p)t(p) + Ac(p)(1−t(p)) (16)
The components Ac(p) and t(p) are assumed to be constant in a local window Ω(p) of p. It can then be derived that

max_{p′∈Ω(p)} Xc(p′) = Ac(p) max_{p′∈Ω(p)} Rc(p′) t(p) + Ac(p)(1 − t(p)). (17)

Using the following maximum reflectance prior [24],

max_{p′∈Ω(p)} Rc(p′) = 1, (18)

it can be derived that

Ac(p) = max_{p′∈Ω(p)} Xc(p′). (19)
Fig. 3: (a) a glow-free haze image; (b) an initial atmospheric light image; (c) a refined atmospheric light image. There are morphological artifacts in the initial atmospheric light image and they are removed by the WGIF.
Based on the properties of super-pixels, the atmospheric light is more likely to be uniform within each super-pixel. Hence, the glow-free nighttime haze image Jc is decomposed into N super-pixels instead of a grid of small patches as in [17]. The brightest pixel within the i-th super-pixel is found and its value is assigned to all the pixels in that super-pixel to build the preliminary atmospheric light IAc(p). Even though the morphological artifacts are reduced by using super-pixels, there are still visible morphological artifacts.
The WGIF can then be applied to remove these morphological artifacts in order to get the clean global atmospheric light Ac(p). The images to be filtered are the IAc(p), and the guidance images are the color components of the glow-free nighttime haze image Jc(p).
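A minimal sketch of this super-pixel based atmospheric light estimation is shown below, assuming the SLIC labels from Section 2.2 are available; the function name and the optional refinement callback (e.g. the guided-filter sketch from Section 2.1, applied channel by channel) are illustrative.

```python
import numpy as np

def estimate_atmospheric_light(J, labels, refine=None):
    """Per-super-pixel atmospheric light estimate following Eq. (19).

    J      : (H, W, 3) glow-free nighttime haze image in [0, 1]
    labels : (H, W)    super-pixel labels, e.g. from the SLIC call above
    refine : optional callable (guidance, image) -> image, e.g. the
             guided_filter sketch given earlier, applied channel-wise
    """
    IA = np.zeros_like(J)
    for k in np.unique(labels):
        mask = labels == k
        vals = J[mask]                               # (n_k, 3) pixels in this super-pixel
        brightest = vals[vals.sum(axis=1).argmax()]  # brightest pixel of the super-pixel
        IA[mask] = brightest                         # assign it to every pixel inside
    if refine is not None:                           # WGIF-style smoothing of IA, guided by J
        IA = np.stack([refine(J[..., c], IA[..., c]) for c in range(3)], axis=-1)
    return IA
```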
Fig. 4: (a),(d) the same glow-free haze image produced by the proposed method; (b),(c) the atmospheric light image and the de-hazed image by the method [17]; (e),(f) the atmospheric light image and the de-hazed image by the proposed method.
3.4 Transmission Map Estimation
After obtaining the atmospheric light, the transmission map is estimated in order to recover the scene radiance. Here the concept of DehazeNet is used to estimate the transmission map. DehazeNet is an end-to-end system: it directly learns and estimates the mapping between hazy image patches and their medium transmissions. This is achieved through the special design of its deep architecture, which incorporates established image de-hazing principles. DehazeNet uses a novel nonlinear activation function called the Bilateral Rectified Linear Unit (BReLU).

BReLU is an activation function that is beneficial for image restoration and reconstruction. It extends the Rectified Linear Unit (ReLU) and demonstrates its importance in obtaining accurate image recovery. Technically, BReLU uses a bilateral restraint to reduce the search space and improve convergence. Connections can be drawn between the components of DehazeNet and the assumptions and priors used in existing de-hazing methods, and DehazeNet improves over these methods by learning all of these components automatically end to end. The DehazeNet network is designed to implement four sequential operations for medium transmission estimation, namely feature extraction, multi-scale mapping, local extremum, and nonlinear regression, as shown in Fig. 5.
1) Feature Extraction: Feature extraction is performed by the convolution layer of the neural network. The input image is filtered with different kinds of filters, so the important information is kept and the unwanted details are removed. Convolution is performed using 5 filters of size 3x5.

2) Multi-Scale Mapping: Here the features are filtered with filters of different scales, such as 4x3, 4x5 and 4x7, so that the details can be analyzed more clearly.

3) Local Extremum: After the previous two steps, a number of filtered feature maps are obtained. The local extremum step selects the most significant values from these filtered maps.

4) Nonlinear Regression: Nonlinear regression is the last step. It maps the selected values to the output, which yields the transmission map (a minimal sketch of such a network follows).
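The sketch below is a loose PyTorch illustration of these four operations together with the BReLU activation. The channel counts and kernel sizes are illustrative (standard square kernels are used instead of the sizes quoted above), so this is not the published DehazeNet architecture, only a structural outline of it.

```python
import torch
import torch.nn as nn

class BReLU(nn.Module):
    """Bilateral ReLU: clamps activations to [t_min, t_max]."""
    def __init__(self, t_min=0.0, t_max=1.0):
        super().__init__()
        self.t_min, self.t_max = t_min, t_max

    def forward(self, x):
        return torch.clamp(x, self.t_min, self.t_max)

class TinyDehazeNet(nn.Module):
    """A simplified DehazeNet-style transmission estimator (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.extract = nn.Sequential(nn.Conv2d(3, 16, 5, padding=2), nn.ReLU())
        # Multi-scale mapping with 3x3, 5x5 and 7x7 kernels
        self.scale3 = nn.Conv2d(16, 16, 3, padding=1)
        self.scale5 = nn.Conv2d(16, 16, 5, padding=2)
        self.scale7 = nn.Conv2d(16, 16, 7, padding=3)
        self.local_extremum = nn.MaxPool2d(kernel_size=7, stride=1, padding=3)
        self.regress = nn.Sequential(nn.Conv2d(48, 1, 3, padding=1), BReLU())

    def forward(self, x):
        f = self.extract(x)                                                      # 1) feature extraction
        m = torch.cat([self.scale3(f), self.scale5(f), self.scale7(f)], dim=1)   # 2) multi-scale mapping
        m = self.local_extremum(m)                                               # 3) local extremum
        return self.regress(m)                                                   # 4) nonlinear regression + BReLU

# Usage: t = TinyDehazeNet()(torch.rand(1, 3, 128, 128))  ->  shape (1, 1, 128, 128)
```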
3.5 Recovery of the Scene Radiance
The haze-free image Zc(p) can be recovered using equation (20). It is worth noting that when the value of t(p) is close to zero, Zc(p)t(p) is also close to zero; in this situation, if the haze-free image is restored directly, the noise in the haze-free image is substantially amplified. Thus a lower bound tm is introduced to constrain t(p), which reduces the noise. The value of tm is decided by the density of the haze: it is selected as 0.2 if the haze is not heavy and 0.375 otherwise. The final image is restored as

Zc(p) = (Jc(p) − Ac(p)) / max(t(p), tm) + Ac(p) (20)

It is worth noting that the proposed adaptation mechanism for tm is coarse, and the quality of the de-hazed images could be improved by a finer choice of tm with respect to different haze levels.
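A minimal sketch of the recovery step in Eq. (20) is given below; the function signature and the binary light/heavy haze switch for tm are illustrative simplifications of the rule described above.

```python
import numpy as np

def recover_radiance(J, A, t, heavy_haze=False):
    """Recover the scene radiance with Eq. (20).

    J : (H, W, 3) glow-free nighttime haze image
    A : (H, W, 3) refined spatially varying atmospheric light
    t : (H, W)    estimated transmission map
    The lower bound t_m follows the rule in the text: 0.2 for light haze,
    0.375 for heavy haze.
    """
    t_m = 0.375 if heavy_haze else 0.2
    t_clamped = np.maximum(t, t_m)[..., None]
    Z = (J - A) / t_clamped + A
    return np.clip(Z, 0.0, 1.0)
```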
4. EXPERIMENTAL RESULTS
Here the proposed algorithm is compared with the daytime haze removal algorithm in [5] and the nighttime haze removal algorithms in [16], [17] and [24].
4.1 Comparison Among Different Haze Removal Algorithms
For comparison, the results of the de-hazing algorithms in [5], [16], [17], [20], [24] and the proposed algorithm are shown in Fig. 6. He's method in [5] is a classical algorithm for daytime haze images, so it is not surprising that it is not suitable for removing the haze from
Fig. 5: The architecture of DehazeNet. DehazeNet conceptually consists of four sequential operations (feature extraction, multi-scale mapping, local extremum and non-linear regression).

Fig. 6: (a,f,k) haze images; (b,g,l) de-hazed images by the method in [5]; (c,h,m) de-hazed images by the method in [16]; (d,i,n) de-hazed images by the method in [17]; (e,j,o) de-hazed images by the proposed algorithm.
the nighttime haze image. Only part of the haze is removed by this method and the glow still exists in the final images. The techniques in [16] and [17] are designed for nighttime images; however, visible halo artifacts remain, noise is amplified in the de-hazed images as illustrated in Fig. 6 (h),(m), and glow remains around the light sources. The method in [16] was improved in [24].

Neither the algorithm in [20] nor the algorithm in [24] addresses the glow artifacts. As such, the glow artifacts are clearly visible in the de-hazed images produced by the algorithms in [20] and [24]. It is worth noting that a bigger tm is selected in equation (20) to avoid amplifying noise in the sky regions. On the other hand, fine details can also be smoothed away by the proposed algorithm; it is therefore desirable to design a finer choice of tm for the proposed algorithm. Even though our algorithm works well for de-hazing nighttime haze images, there are still some deficiencies. For example, the haze removal result appears darker than the input image. Fortunately, this problem can be addressed using the single image brightening algorithm in [26].
The running time of the proposed algorithm is slightly longer because of the segmentation of the glow-free image into super-pixels. The approach in [17] works better than the method in [16]. Unfortunately, noise is likewise amplified and halo artifacts still exist in the final images. The algorithm in [17] assumes that the atmospheric light is locally constant in a grid of small patches. However, this is not always true. On the other hand, according to the properties of the super-pixels, it is more likely that the atmospheric light is constant within each super-pixel.
5. CONCLUSION
This paper presented a novel super-pixel based nighttime haze image de-hazing algorithm. The proposed algorithm can eliminate the glow and the haze better than the state-of-the-art methods. Local fine details are also preserved better, and color distortion and halo artifacts are markedly reduced. As such, the haze-free image is closer to the natural scene. On the other hand, because segmentation of the glow-free nighttime haze image into super-pixels is required by the proposed algorithm, the running time and the complexity are increased. It is known that existing single image haze removal algorithms suffer from amplifying noise in the sky region. This problem could be addressed by selecting a finer adaptive tm in equation (20), which can be studied in future research.
REFERENCES
[1] S. G. Narasimhan and S. K. Nayar, “Vision and the
atmosphere,” International Journal onComputerVision,
vol. 48, no. 3, pp. 233-254, Jun. 2002.
[2] Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar,
“Polarization based vision through haze,” Applied
Optics, vol. 42, no. 3, pp. 511-525, 2003.
[3] Y. H. Lai, Y. L. Chen, C. J. Chiou, and C. T. Hsu, “Single-
image dehazing via optimal transmission map under
scene priors,” IEEE Transactions on Circuits and Systems
for Video Technology, vol. 25, no. 1, pp. 1-14, Jan. 2015.
[4] W. C. Wang, X. H. Yuan, X. J. Wu, and Y. L. Liu, “Fast Image
Dehazing Method Based on Linear Transformation,”
IEEE Transactions on Multimedia, vol. 19, no. 6, pp.
1142-1155, Jun. 2017.
[5] K. M. He, J. Sun, and X. O. Tang, “Guided image filtering,”
IEEE Transactions on Pattern Analysis and Machine
Intelligence, vol. 35, no.6, pp. 1397-1409, Jun. 2013.
[6] F. Kou, W. H. Chen, C. Y. Wen, and Z. G. Li, “Gradient
Domain Guided Image Filtering,” IEEE Transactions on
Image Processing, vol. 24, no.11, pp. 4528-4539, Nov.
2015
[7] Z. G. Li, J. H. Zheng, W. Yao, and Z. J. Zhu, “Single image
haze Removal via a simplified dark channel,” in 2015
IEEE International Conference on Acoustics, Speech and
Signal Processing, pp. 1608-1612, Apr. 2015.
[8] Q. S. Zhu, D. Wu, Y. Q. Xie, and L. Wang, “Quick shift
segmentation guided single image haze removal
algorithm,” Proceedings of the 2014 IEEE International
Conference on Robotics and Biomimetics, pp. 113- 117,
Dec. 2014
[9] J. Zhang, Y. Cao, and Z. Wang, “Nighttime haze removal
based on a New Imaging Model,” IEEE International
Conference on Image Processing, pp. 4557-4561, Oct.
2014.
[10] C. Ancuti, C. O. Ancuti, C. De Vleeschouwer, and A. C.
Bovik, “Nighttime dehazing by fusion,” IEEE
International Conference on Image Processing, pp.
2256-2260, 2016.
[11] Z. G. Li, J. H. Zheng, Z. J. Zhu, W. Yao, and S. Q. Wu,
“Weighted guided image filtering,” IEEE Transactions on
Image Processing, vol. 24, no. 1, pp. 120-129, Jan. 2015.
[12] H. T. Xu, G. T. Zhai, X. L. Wu, and X. K. Yang,
“Generalized Equalization Model for Image
Enhancement,” IEEE Transactions on Multimedia, vol.
16, no. 1, pp. 61-82, Jan. 2014.
[13] K. T. Shih and H. H. Chen, “Exploiting Perceptual
Anchoring for Color Image Enhancement,” IEEE
Transactions on Multimedia, vol. 18, no. 2, pp. 300-310,
Feb. 2016.
[14] F. Kou, Z. Wei, W. H. Chen, X. M. Wu, C. Y. Wen, and Z. G.
Li, “Intelligent detail enhancement for exposure fusion,”
IEEE Transactions on Multimedia, Aug. 2017.
[15] S. C. Pei and T. Y. Lee, “Nighttime haze removal using
color transfer pre-processing and dark channel prior,”
IEEE International Conference on Image Processing, pp.
957-960, Oct. 2012.
[16] J. Zhang, Y. Cao, and Z. Wang, “Nighttime haze removal
based on a New Imaging Model,” IEEE International
Conference on Image Processing, pp. 4557-4561, Oct.
2014.
[17] Y. Li, R. T. Tan, and M. S. Brown, “Nighttime haze
removal with glow and multiple light colors,” IEEE
International Conference on Computer Vision, pp. 226-
234, Dec. 2015
[18] Y. Li and M. S. Brown, “Single image layer separation
using relative smoothness,” IEEE Conference Computer
Vision and Pattern Recognition, 2014.
[19] J. Zhang, Y. Cao, S. Fang, Y. Kang, and C. W. Chen, “Fast
haze removal for nighttime image using the reflectance
prior,” in 2017 Conference on Computer Vision and
Pattern Recognition, Jul. 2017.
[20] K. M. He, J. Sun, and X. O. Tang, “Single image haze removal using dark channel prior,” IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 2341-2353, 2011.
[21] F. Liu, C. Shen, G. Lin, and I. Reid, “Learning Depth from
Single Monocular Images Using Deep Convolutional
Neural Fields,” IEEE Transactions on Pattern Analysis
and Machine Intelligence, vol. 38, no. 10, pp.2024-2039,
Dec. 2016.
[22] Z. G. Li and J. H. Zheng, “Single image brightening via
exposure fusion,” in 2016 IEEE International Conference
on Acoustics, Speech and Signal Processing, pp. 1756-
1760, Shanghai, China, Mar. 2016.
[23] R. Achanta, A. Shaji, K. Smith, A. Lucchi, and P. Fua, “SLIC
superpixels compared to state-of-the-art superpixel
methods,” IEEE Transactions on Pattern Analysis and
Machine Intelligence, vol. 34, no. 11, pp. 2274-2282,
Nov. 2012.
[24] L. K. Choi, J. You, and A. C. Bovik, “Referenceless
prediction of perceptual fog density and perceptual
image defogging,” IEEE Transactions on Image
Processing, vol. 24, no. 11, pp. 3888-3901, Nov. 2015.
[25] S. G. Narasimhan and S. K. Nayar, “Shedding light on the
weather,” IEEE Conference on Computer Vision and Pattern Recognition, 2003.
[26] Z. G. Li and J. H. Zheng, “Single Image De-Hazing Using
Globally Guided Image Filtering,” IEEE Transactions on
Image Processing, vol. 27, no. 1, pp. 442-450, Jan. 2018.
More Related Content

PDF
A Novel Background Subtraction Algorithm for Dynamic Texture Scenes
PPT
The single image dehazing based on efficient transmission estimation
PDF
Survey on Haze Removal Techniques
PDF
A Review on Haze Removal Techniques
PPTX
OpenStreetMap in 3D - current developments
PDF
EVALUATION OF THE VISUAL ODOMETRY METHODS FOR SEMI-DENSE REAL-TIME
DOCX
A fast single image haze removal algorithm using color attenuation prior
PDF
Single Image Fog Removal Based on Fusion Strategy
A Novel Background Subtraction Algorithm for Dynamic Texture Scenes
The single image dehazing based on efficient transmission estimation
Survey on Haze Removal Techniques
A Review on Haze Removal Techniques
OpenStreetMap in 3D - current developments
EVALUATION OF THE VISUAL ODOMETRY METHODS FOR SEMI-DENSE REAL-TIME
A fast single image haze removal algorithm using color attenuation prior
Single Image Fog Removal Based on Fusion Strategy

What's hot (20)

PDF
A Three-Dimensional Representation method for Noisy Point Clouds based on Gro...
PDF
IJRET-V1I1P2 -A Survey Paper On Single Image and Video Dehazing Methods
PDF
“Efficient Deep Learning for 3D Point Cloud Understanding,” a Presentation fr...
PDF
Accelerated Joint Image Despeckling Algorithm in the Wavelet and Spatial Domains
PDF
Geometric wavelet transform for optical flow estimation algorithm
PDF
IRJET- Object Detection in Underwater Images using Faster Region based Convol...
PDF
Canny Edge Detection Algorithm on FPGA
PDF
Application of Image Retrieval Techniques to Understand Evolving Weather
PDF
B045050812
PDF
A new approach of edge detection in sar images using region based active cont...
PPTX
Fault Enhancement Using Spectrally Based Seismic Attributes -- Dewett and Hen...
PDF
IRJET- A Comprehensive Study on Image Defogging Techniques
PDF
IRJET- A Comparative Analysis of various Visibility Enhancement Techniques th...
PDF
Introduction to Wavelet Transform and Two Stage Image DE noising Using Princi...
PPTX
Background subtraction
PDF
Seed net automatic seed generation with deep reinforcement learning for robus...
PDF
A new approach of edge detection in sar images using
PPT
Automatic Dense Semantic Mapping From Visual Street-level Imagery
PDF
Moving object detection using background subtraction algorithm using simulink
PPTX
APPLICATION OF IP TECHNIQUES IN TRAFFIC CONTROL SYSTEM
A Three-Dimensional Representation method for Noisy Point Clouds based on Gro...
IJRET-V1I1P2 -A Survey Paper On Single Image and Video Dehazing Methods
“Efficient Deep Learning for 3D Point Cloud Understanding,” a Presentation fr...
Accelerated Joint Image Despeckling Algorithm in the Wavelet and Spatial Domains
Geometric wavelet transform for optical flow estimation algorithm
IRJET- Object Detection in Underwater Images using Faster Region based Convol...
Canny Edge Detection Algorithm on FPGA
Application of Image Retrieval Techniques to Understand Evolving Weather
B045050812
A new approach of edge detection in sar images using region based active cont...
Fault Enhancement Using Spectrally Based Seismic Attributes -- Dewett and Hen...
IRJET- A Comprehensive Study on Image Defogging Techniques
IRJET- A Comparative Analysis of various Visibility Enhancement Techniques th...
Introduction to Wavelet Transform and Two Stage Image DE noising Using Princi...
Background subtraction
Seed net automatic seed generation with deep reinforcement learning for robus...
A new approach of edge detection in sar images using
Automatic Dense Semantic Mapping From Visual Street-level Imagery
Moving object detection using background subtraction algorithm using simulink
APPLICATION OF IP TECHNIQUES IN TRAFFIC CONTROL SYSTEM
Ad

Similar to IRJET - Dehazing of Single Nighttime Haze Image using Superpixel Method (20)

PDF
IRJET- A Review on Image Denoising & Dehazing Algorithm to Improve Dark Chann...
PDF
Despeckling of Sar Image using Curvelet Transform
PDF
A Review on Airlight Estimation Haze Removal Algorithms
PDF
A Survey on Single Image Dehazing Approaches
PDF
A Novel Dehazing Method for Color Accuracy and Contrast Enhancement Method fo...
PDF
High Efficiency Haze Removal Using Contextual Regularization Algorithm
PDF
X-Ray Image Enhancement using CLAHE Method
PDF
Review on Various Algorithm for Cloud Detection and Removal for Images
PDF
I0343065072
PDF
Visibility Enhancement of Hazy Images using Depth Estimation Concept
PDF
IRJET-Underwater Image Enhancement by Wavelet Decomposition using FPGA
PPTX
New microsoft power point presentation
PDF
IRJET- A Comparative Review of Satellite Image Super Resolution Techniques
PDF
A Review on Deformation Measurement from Speckle Patterns using Digital Image...
PDF
IJARCCE 22
PDF
Survey on Image Integration of Misaligned Images
PDF
Visual Environment by Semantic Segmentation Using Deep Learning: A Prototype ...
PDF
An improved image compression algorithm based on daubechies wavelets with ar...
PDF
Survey on Various Image Denoising Techniques
PDF
Shadow Detection and Removal using Tricolor Attenuation Model Based on Featur...
IRJET- A Review on Image Denoising & Dehazing Algorithm to Improve Dark Chann...
Despeckling of Sar Image using Curvelet Transform
A Review on Airlight Estimation Haze Removal Algorithms
A Survey on Single Image Dehazing Approaches
A Novel Dehazing Method for Color Accuracy and Contrast Enhancement Method fo...
High Efficiency Haze Removal Using Contextual Regularization Algorithm
X-Ray Image Enhancement using CLAHE Method
Review on Various Algorithm for Cloud Detection and Removal for Images
I0343065072
Visibility Enhancement of Hazy Images using Depth Estimation Concept
IRJET-Underwater Image Enhancement by Wavelet Decomposition using FPGA
New microsoft power point presentation
IRJET- A Comparative Review of Satellite Image Super Resolution Techniques
A Review on Deformation Measurement from Speckle Patterns using Digital Image...
IJARCCE 22
Survey on Image Integration of Misaligned Images
Visual Environment by Semantic Segmentation Using Deep Learning: A Prototype ...
An improved image compression algorithm based on daubechies wavelets with ar...
Survey on Various Image Denoising Techniques
Shadow Detection and Removal using Tricolor Attenuation Model Based on Featur...
Ad

More from IRJET Journal (20)

PDF
Enhanced heart disease prediction using SKNDGR ensemble Machine Learning Model
PDF
Utilizing Biomedical Waste for Sustainable Brick Manufacturing: A Novel Appro...
PDF
Kiona – A Smart Society Automation Project
PDF
DESIGN AND DEVELOPMENT OF BATTERY THERMAL MANAGEMENT SYSTEM USING PHASE CHANG...
PDF
Invest in Innovation: Empowering Ideas through Blockchain Based Crowdfunding
PDF
SPACE WATCH YOUR REAL-TIME SPACE INFORMATION HUB
PDF
A Review on Influence of Fluid Viscous Damper on The Behaviour of Multi-store...
PDF
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...
PDF
Explainable AI(XAI) using LIME and Disease Detection in Mango Leaf by Transfe...
PDF
BRAIN TUMOUR DETECTION AND CLASSIFICATION
PDF
The Project Manager as an ambassador of the contract. The case of NEC4 ECC co...
PDF
"Enhanced Heat Transfer Performance in Shell and Tube Heat Exchangers: A CFD ...
PDF
Advancements in CFD Analysis of Shell and Tube Heat Exchangers with Nanofluid...
PDF
Breast Cancer Detection using Computer Vision
PDF
Auto-Charging E-Vehicle with its battery Management.
PDF
Analysis of high energy charge particle in the Heliosphere
PDF
A Novel System for Recommending Agricultural Crops Using Machine Learning App...
PDF
Auto-Charging E-Vehicle with its battery Management.
PDF
Analysis of high energy charge particle in the Heliosphere
PDF
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...
Enhanced heart disease prediction using SKNDGR ensemble Machine Learning Model
Utilizing Biomedical Waste for Sustainable Brick Manufacturing: A Novel Appro...
Kiona – A Smart Society Automation Project
DESIGN AND DEVELOPMENT OF BATTERY THERMAL MANAGEMENT SYSTEM USING PHASE CHANG...
Invest in Innovation: Empowering Ideas through Blockchain Based Crowdfunding
SPACE WATCH YOUR REAL-TIME SPACE INFORMATION HUB
A Review on Influence of Fluid Viscous Damper on The Behaviour of Multi-store...
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...
Explainable AI(XAI) using LIME and Disease Detection in Mango Leaf by Transfe...
BRAIN TUMOUR DETECTION AND CLASSIFICATION
The Project Manager as an ambassador of the contract. The case of NEC4 ECC co...
"Enhanced Heat Transfer Performance in Shell and Tube Heat Exchangers: A CFD ...
Advancements in CFD Analysis of Shell and Tube Heat Exchangers with Nanofluid...
Breast Cancer Detection using Computer Vision
Auto-Charging E-Vehicle with its battery Management.
Analysis of high energy charge particle in the Heliosphere
A Novel System for Recommending Agricultural Crops Using Machine Learning App...
Auto-Charging E-Vehicle with its battery Management.
Analysis of high energy charge particle in the Heliosphere
Wireless Arduino Control via Mobile: Eliminating the Need for a Dedicated Wir...

Recently uploaded (20)

PDF
SM_6th-Sem__Cse_Internet-of-Things.pdf IOT
PPTX
Welding lecture in detail for understanding
PDF
July 2025 - Top 10 Read Articles in International Journal of Software Enginee...
PDF
Model Code of Practice - Construction Work - 21102022 .pdf
PDF
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
PDF
PPT on Performance Review to get promotions
PDF
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
PDF
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
PDF
Well-logging-methods_new................
PPTX
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
PPTX
IOT PPTs Week 10 Lecture Material.pptx of NPTEL Smart Cities contd
PPT
Project quality management in manufacturing
PDF
Embodied AI: Ushering in the Next Era of Intelligent Systems
PPTX
UNIT 4 Total Quality Management .pptx
PPTX
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
PPTX
CYBER-CRIMES AND SECURITY A guide to understanding
PPT
Mechanical Engineering MATERIALS Selection
PDF
PRIZ Academy - 9 Windows Thinking Where to Invest Today to Win Tomorrow.pdf
DOCX
ASol_English-Language-Literature-Set-1-27-02-2023-converted.docx
PPTX
Lecture Notes Electrical Wiring System Components
SM_6th-Sem__Cse_Internet-of-Things.pdf IOT
Welding lecture in detail for understanding
July 2025 - Top 10 Read Articles in International Journal of Software Enginee...
Model Code of Practice - Construction Work - 21102022 .pdf
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
PPT on Performance Review to get promotions
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
The CXO Playbook 2025 – Future-Ready Strategies for C-Suite Leaders Cerebrai...
Well-logging-methods_new................
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
IOT PPTs Week 10 Lecture Material.pptx of NPTEL Smart Cities contd
Project quality management in manufacturing
Embodied AI: Ushering in the Next Era of Intelligent Systems
UNIT 4 Total Quality Management .pptx
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
CYBER-CRIMES AND SECURITY A guide to understanding
Mechanical Engineering MATERIALS Selection
PRIZ Academy - 9 Windows Thinking Where to Invest Today to Win Tomorrow.pdf
ASol_English-Language-Literature-Set-1-27-02-2023-converted.docx
Lecture Notes Electrical Wiring System Components

IRJET - Dehazing of Single Nighttime Haze Image using Superpixel Method

  • 1. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 07 Issue: 02 | Feb 2020 www.irjet.net p-ISSN: 2395-0072 © 2020, IRJET | Impact Factor value: 7.34 | ISO 9001:2008 Certified Journal | Page 998 Dehazing Of Single Nighttime Haze Image using Superpixel Method A. Susmi Raj1, R. Soundararajan2 1PG Scholar, Dept. of Computer Science and Engineering, SVS College of Engineering Coimbatore, Tamilnadu, India 2Assistant Professor, Dept. of Computer Science and Engineering, SVS College of Engineering Coimbatore, Tamilnadu, India ---------------------------------------------------------------------***---------------------------------------------------------------------- Abstract - The open air picturescapturedinharshclimate are corrupted due to the nearness of fog, mist, rain and so on. Pictures of scenes captured in lousy climate have destitute contrasts and colors. This might also cause problem in spotting the objects within the captured murky images. Dueto murkiness there may be trouble to numerous computer vision application because it diminishes the perceivability of the scene. Picture de-hazing is one of the essential imperative inspect variety in picture preparing. Cloudinessisgenuinely an Climatic effect. In this paper a novel super-pixel based single image haze removal algorithm with a neural network method is proposed for nighttime haze image. The input nighttime haze image is first decomposed into a glow image and glow-loose nighttime haze picture the usage of their relative smoothness. A super pixel based approach is then brought to compute the price of atmospheric light and dark channel for each pixel in the glow free haze image. The transmission map is decomposed from the darkish channel of the glow-unfastened haze photo by means of the weighted guided photograph filter. Since super- pixels normally adhere to the limits of gadgets well, a smaller nearby window size may be selected. In addition to this the gathered images (the hazy images) will input to a neural network that will increase the quality oftheoutput imagethat the actual image we want to get. Key words: Nighttime image haze removal, Glow decomposition, Morphological artifacts, Super-pixel segmentation, Weighted guided image filtering 1. INTRODUCTION In real life, natural phenomena (inclusive of rain, haze,snow etc)can lead to degradation of outdoor images. Because in these environments, massive amount of particle floating in the atmosphere, mild propagation within the atmosphere will be suffering from these floating particles.Inthesector of computer vision image de-hazing has a much broader demand. From the perspective of image processing, it may provide pretreatment for some visual algorithms. Besides from the practical factor of view, it plays an important role in military system and civil system [1]. Haze free images makes the scene look more practical and provide more useful information. Thereforethestudiesof imagede-hazing has critical practical importance and developmentprospect. There were many haze removal techniques which are primarily based on the optical model developed fordaytime haze images. At present, the most widely bodily model is a linear equation such as transmission and atmospheric light. According to the version, day time de-hazingtechniquewant to estimate the atmospheric mild and transmission map. Many classic strategies use additional information or multiple photographs to do away with haze from the haze snapshot. For example Schechner et al. 
[2] proposed a novel technique of using images with different polarizer orientation to de-hazing. Lai et al. [3] proposed an interesting technique to derive the most beneficial transmission map at once from the sunlight hours haze image version. In [4], a quick algorithm for single image de- hazing is proposed based totally on linear transformation with the aid of assuming that a linear relationship existence inside the minimal channel between the hazy image and the haze-free image. On top of dark channel prior, He et al. introduced a simple method to study single image haze elimination in [5]. The dark channel prior is based on a commentary that most nearby patches in haze-free scene images include a few pixels that have very low intensities in at the least one coloration channel. Many dark channel prior based totally haze removal algorithms were delivered since then [7][8]. The dark channel earlier based totally methods usually work well, but the dark channel earlier has limitations. For example,morphological artifactsareanissue for the darkish channel previous while the preliminary transmission map is computed use of the dark channel earlier [5]. A simple edge retaining decomposition based totally framework changed into proposed for single image haze removal in [9]. Simplified dark channel of the haze image is decomposed into a base layer and a element layer via the weighted guided image filter (WGIF) in [11], and the transmission map is expected from the base layer. Image enhancement based techniques alsoareappliedtotheimage de-hazing, for example [12][13][14]. Even though these techniques generally carry out well for daylight hours haze image shots, they are now not properly ready to correct nighttime scenes due to varying imaging conditions such as active light assets or glow effects. Due to existence of active light assets, for example, avenue lighting et al, and their related glow, the version of daytime haze images are now not applicable to the midnight haze image. Recently, several interesting methods have beenemergedto decorate nighttime haze images [15][16][17][20]. Pei and Lee [15] introduced a method primarily based the color transfer processing. The colorings of a nighttime hazeimage had been mapped to those of a daytime haze image. Then, a dark channel prior primarily based set of rules was supplied to estimate the transmission map. A post-processing step become also furnished to improvetheinsufficient brightness
  • 2. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 07 Issue: 02 | Feb 2020 www.irjet.net p-ISSN: 2395-0072 © 2020, IRJET | Impact Factor value: 7.34 | ISO 9001:2008 Certified Journal | Page 999 and coffee overall evaluation of haze-free image. Due to the coloration transfer, even though their approach has the dependable de-hazing quality, the shade of the complete haze free image looks unnatural. Zhang et al [16] proposeda new photo model which accounted for various illumination. First, the mild intensity became estimated and enhanced to achieve an illumination balanced result, then, they expected the shade traits of the incident light, finally, they used the darkish channel previous to cast off the haze. The main contribution of this paper is a new type of haze elimination algorithm using the concept of super-pixel. Compared with the patch based totally methods, the super- pixel can be used to reduce morphologic artifacts because of the patch. This is due to the fact that the super-pixel can adhere to boundaries of items within thehazeimage well. As such, the radius of the WGIF may be reduced. Subsequently, more exceptional details may be preserved. In addition, the proposed super-pixel based totally technique has more chance to correctly estimate the transmission maps for all pixels I white gadgets nearby a digital camera than algorithms in [9][17] and [25]. In other words, the proposed method provides a technique to a challenging problem on the estimation of transmissionmapsforpixelsin whiteitems close by the digital camera. The rest of this paper is organized as preliminaryknowledge,procedureofnighttime single image haze removal, experimental results and conclusion. 2. PRELIMINARY KNOWLEDGE Since the WGIF and the SLIC might be implemented in the proposed method, the relevant expertise on them are summarized in this section. 2.1. Weighted Guided Image Filter Denote the guidance photograph as I, the filtering output as Z, and the input photograph as X. The pictures I and X can be identical. Z is thought to be a linear remodel of I in a window Ωk focused at the pixel p, and the radius of the window Ωk is r Z(p) = ak I(p)+bk,∀p ∈ Ωk, (1) Wherein the price of (ak,bk) can be computed by means of minimizing the following cost function in the window Ωk. E(ak,bk) = ∑ p∈Ωk ((ak I(p)+bk – Xp)2 +λak 2 ), (2) And λ is a regularization parameterpenalizinglargeak.Since, halo artifacts may appear on some edges, an edge- conscious weighting is introduced and incorporated into the GIF to form the WGIF. The edge-aware weighting ΓI(k) is : E(ak,bk) = ∑ p∈Ωk ((ak I(p)+bk − X(p))2 + λ ΓI(k) a 2 k ), (3) The solutions of ak and bk are given as: ak = µI ⊙X,r (k)−µI,r(k)µX,r(k) σ 2 I,r (k) + λ ΓI (k) , (4) bk = µX,r(k)−akµI,r(k) (5) 2.2 Super-pixel Segmentation Via The SLIC A super-pixel is a small region composed by a sequence of pixels with adjacent positions, similar shade, similar brightness, similar texture and different characteristics.The SLIC (Simple Linear Iterative Clustering) is a common approach of the super pixel segmentation. This method can phase pixels quick and simply, besides, it may perceive barriers better. The color images are converted into a five- dimensional feature vectors V = [l,a,b,x,y] in CIELAB color area and XY coordinate. Where, [l,a,b] suggests the color of pixel, [x,y] shows the placement of pixel. 
It is a distance measurement standard for five-dimensional characteristic vectors, and a neighborhood clustering procedure for photo pixels. SLIC set of rules can generate compact and about uniform super-pixels. Besides, t hasa highoverall evaluation in terms of speed, contour preserving and hyperpixel shape, and the segmentation impact is in keeping with people’s expectation. The metric of SLIC is given as: dlab = √ (l(p)−l(p ′ ))2 +(a(p)−a(p ′ ))2 +(b(p)−b(p ′ ))2 (6) dxy = √ (x(p)− x(p ′ ))2 +(y(p)−y(p ′ ))2 (7) Ds = √ ( dlab Nlab )2 +( dxy Nxy )2 (8) where, dlab represents color distance; dxy represents spatial distance; p and p ′ are two pixels; Nxy = √ K N , where K is called the total number of pixels, N is the number of the super pixels; Nlab varies with special images, however it is normally a set value. The super-pixels normally can pick out obstacles better. So, it is able to be expected that the concept of super-pixel may be used to reduce the morphological artifacts within the single image haze removal. 3. NIGHTTIME SINGLE IMAGE HAZE REMOVAL In this section, first provide information on the model of nighttime haze images. Based on this version, the glow is eliminated from the input photograph therebytheglow-free haze image is obtained. The model of the glow-loose haze photo is similar with the version of sunlight hours haze picture except that the atmospheric light is spatially varying inside the nighttime haze photograph. It is thus predicted that existing dark channel based algorithm for sunlight hours haze removal may be prolonged to address nighttime haze image. Based on this observation, a super-pixel based haze elimination algorithm is offered for the glow-free nighttime haze images.
  • 3. International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056 Volume: 07 Issue: 02 | Feb 2020 www.irjet.net p-ISSN: 2395-0072 © 2020, IRJET | Impact Factor value: 7.34 | ISO 9001:2008 Certified Journal | Page 1000 3.1 Modelling Of A Nighttime Haze Image Since the light source best from the sun, the sunlight hours images are not stricken by otherlightsources. Inthedaylight hours haze image de-hazing algorithm, the model is generally used as; Xc(p) = Zc(p)t(p) + Ac(1−t(p)) (9) When a photo is captured at night, light sources especially come from road lighting fixtures and car lighting etc. These lights aren’t international uniform, thus; the glow is present inside the image. The glow is an atmospheric point spread function (APSF). Inspired by this, the complete nighttime haze scenes by adding the glow version into the daylight haze picture model: Xc(p) = Zc(p)t(p)+ Ac(1−t(p))+ Aa ∗ APSF (10) In which c∈{r,g,b} represents coloration channel index, Xc shows discovered nighttime images. Zc represents the scene radiance vector. T is the transmission map and indicates the elements of light that penetrates via the haze. Ac represents atmospheric light which is not always globally uniform any longer. Aa indicates active light sources, that the intensity is convolved with APSF. So that it will get the scene radiance vector, this is, the haze free image Z, the glow (that is Aa*APSF) are decomposed from the input photograph (Xc(p)), and the atmospheric light Ac and the transmission map t are estimated. 3.2 Glow Removal from the Input Image From the unprocessed images, we can see that glow can reduce the visibility of the photograph or may bemakea few objects unseen. In order to attain an excellent visual image, the glow should be eliminated from the input photograph. For simplicity, the equation (10) can be written as: Xc(p) = Jc(p)+Gc(p) (11) Where, Jc(p) = Zc(p)t(p) + Ac(1-t(p)), we name it a glow free nighttime haze image, and Gc(p) = Aa* APSF, it’s far a glow photograph. From the equation (12), the glow elimination can be seemed as a layer separation problem. The objective function for glow layer separation may be described as follow: E(J) = ∑ p (ρ(J(p) ∗ f1,2) + λ1((X(p)− J(p)) ∗ f3)2) s.t.0 ≤ J(p) ≤ X(p) (12) ∑Jr(p) = ∑ p Jg(p) = ∑ p Jb(p) Where, f12 is the 2-route first spinoff filters. f3 is the second order Laplacian filter out and the operator ∗ denotes convolution. ρ(µ) = min(µ2,τ) is a robust characteristic which preserves large gradients of the input photo X within the closing nighttime haze layer [17]. For simplicity, we outline FjL = L ∗ fi then the objective function can be rewritten as; E(J) = ∑ p (ρ(F1,2 J(p)) + λ1(F3 J(p)− F3 X(p))2 ) (13) According to the method [23], with a view to pass the FjL time period outdoor the ρ(·) function, a weightβisdelivered to the goal function, the brandnewobjectivefunctionmaybe written as; E(J) = ∑ p (β(F1,2 J(p)−g1,2)2 +ρ(g1,2) + λ1(F3 J(p)− F3 X(p))2 ) (14) (a) (b) (c) (d) (e) (f) Fig.1: (a,d) two nighttime haze pictures; (b,e) glow pix with the aid of the method[23]; (c,f) glow photographs by the proposed algorithm. The glow pictures produced with the aid of the proposed set of rules are smoother. It can be validated that the above solution is an approximated solution at the same time as the provided solution within the equation (16) is an precise one. Therefore, our output is more correct than the output in [23]. 
Fig.1 shows the glow images separated by the algorithm in [23] and by the proposed algorithm. Clearly, the glow components in (c,f) are more detailed and continuous than those in (b,e). Therefore, the proposed glow decomposition method is better than the technique in [23]. Since the input image consists of a glow image and a glow-free haze image, the glow decomposition procedure directly affects the composition of the glow-free image and thereby the quality of the de-hazed image.
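To make the decomposition model in equations (10) and (11) concrete, the sketch below synthesizes a nighttime haze observation from its components. This is an illustration only: the array shapes, the function name and the use of a generic kernel as a stand-in for the APSF are assumptions, not part of the proposed algorithm.

```python
import numpy as np
from scipy.signal import convolve2d

def synthesize_nighttime_haze(Z, t, A, glow_sources, apsf_kernel):
    """Compose X = Z*t + A*(1 - t) + Aa (*) APSF, following equations (10) and (11).

    Z            : (H, W, 3) scene radiance in [0, 1].
    t            : (H, W) transmission map in [0, 1].
    A            : (H, W, 3) spatially varying atmospheric light.
    glow_sources : (H, W, 3) active light source map Aa (zero away from lamps).
    apsf_kernel  : (k, k) kernel standing in for the atmospheric point spread function.
    """
    t3 = t[..., None]                          # broadcast transmission over color channels
    J = Z * t3 + A * (1.0 - t3)                # glow-free nighttime haze image, eq. (11)
    G = np.stack([convolve2d(glow_sources[..., c], apsf_kernel, mode="same")
                  for c in range(3)], axis=-1) # glow image: Aa convolved with the APSF
    X = np.clip(J + G, 0.0, 1.0)               # observed nighttime haze image, eq. (10)
    return X, J, G
```

Such a synthetic image can be useful for checking that the glow decomposition recovers J and G reasonably before applying the method to real nighttime photographs.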
Fig.2: (a,f) two haze images; (b,g) glow images by the method in [23]; (c,h) glow-free haze images by the method in [23]; (d,i) glow images by the proposed algorithm; (e,j) glow-free haze images by the proposed algorithm. There are more details in the glow-free haze images produced by the proposed algorithm.

Fig.2 shows the glow images and glow-free images obtained by Li's algorithm and by the proposed algorithm. It can be seen from Fig.2 (c),(e) that the glow is removed better by our algorithm, and from Fig.2 (h),(j) that the glow-free haze images are closer to the natural scene.

3.3 Atmospheric Light Estimation

After the glow has been removed from the input image, a glow-free nighttime haze image is obtained. In order to restore the haze-free image, the atmospheric light A_c and the transmission map t need to be estimated. Since the model of the glow-free nighttime haze image is almost the same as that of a daytime haze image except for the atmospheric light, it can be expected that some existing daytime de-hazing methods can be applied to remove the haze from the glow-free nighttime haze image. The atmospheric light is usually taken as the brightest color in a daytime haze image. However, due to the presence of artificial light sources, the atmospheric light is no longer globally uniform in the glow-free nighttime haze image. According to the image formation model, the image Z_c(p) is a product of illumination and reflectance components:

Z_c(p) = A_c(p) R_c(p)    (15)

where R_c(p) is the reflectance component. The glow-free nighttime haze image is then represented as:

J_c(p) = A_c(p) R_c(p) t(p) + A_c(p) (1 − t(p))    (16)

The components A_c(p) and t(p) are assumed to be constant in a local window Ω(p). It can then be derived that

max_{p' ∈ Ω(p)} {J_c(p')} = A_c(p) max_{p' ∈ Ω(p)} {R_c(p')} t(p) + A_c(p) (1 − t(p))    (17)

Using the maximum reflectance prior [24],

max_{p' ∈ Ω(p)} {R_c(p')} = 1    (18)

it can be derived that

A_c(p) = max_{p' ∈ Ω(p)} {J_c(p')}    (19)

Fig.3: (a) a glow-free haze image; (b) an initial atmospheric light image; (c) a refined atmospheric light image. There are morphological artifacts in the initial atmospheric light image and they are removed by the WGIF.

On the basis of the characteristics of super-pixels, the atmospheric light is more likely to be uniform within each super-pixel. Hence, the glow-free nighttime haze image J_c is decomposed into N super-pixels instead of a grid of small regions as in [17]. The brightest pixel within the i-th super-pixel is selected and assigned to all pixels inside that super-pixel to form the initial atmospheric light IA_c(p). Even though the morphological artifacts are reduced by using super-pixels, some visible morphological artifacts remain. The WGIF is therefore applied to remove these artifacts and to obtain the clean global atmospheric light A_c(p); the images to be filtered are IA_c(p), and the guidance images are the color components of the glow-free nighttime haze image J_c(p).

Fig.4: (a),(d) the same glow-free haze image obtained by the proposed method; (b),(c) the atmospheric light image and the de-hazed image by the method in [17]; (e),(f) the atmospheric light image and the de-hazed image by the proposed method.
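A minimal sketch of the super-pixel based atmospheric light estimation described above follows. It uses the SLIC implementation from scikit-image; the parameter values and function name are illustrative assumptions, and the subsequent WGIF refinement is not reproduced here (a guided filter could stand in for it).

```python
import numpy as np
from skimage.segmentation import slic

def initial_atmospheric_light(J, n_segments=500, compactness=10.0):
    """Initial atmospheric light IA_c(p) from the glow-free haze image J.

    J is an (H, W, 3) float image in [0, 1].  Within each super-pixel the brightest
    value of each color channel is taken, following equation (19), and assigned to
    all pixels of that super-pixel.
    """
    labels = slic(J, n_segments=n_segments, compactness=compactness, start_label=0)
    IA = np.zeros_like(J)
    for label in np.unique(labels):
        mask = labels == label
        IA[mask] = J[mask].max(axis=0)   # per-channel maximum inside the super-pixel
    return IA
```

The resulting IA_c(p) still shows block-like morphological artifacts at super-pixel borders, which is why the text above refines it with the WGIF guided by the color components of J_c(p).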
3.4 Transmission Map Estimation

After obtaining the atmospheric light, the transmission map is estimated in order to recover the scene radiance. Here the concept of DehazeNet is used to estimate the transmission map. DehazeNet is an end-to-end system: it directly learns and estimates the mapping between hazy image patches and their medium transmissions. This is achieved through the special design of its deep architecture, which embodies established image de-hazing principles. A novel nonlinear activation function in DehazeNet, called the Bilateral Rectified Linear Unit (BReLU), is adopted. BReLU extends the Rectified Linear Unit (ReLU) and is beneficial for image restoration and reconstruction; technically, BReLU uses a bilateral restraint to reduce the search space and improve convergence. Connections can be established between the components of DehazeNet and the assumptions/priors used in existing de-hazing methods, and DehazeNet improves over these methods by learning all of these components automatically, end to end. DehazeNet performs four sequential operations for medium transmission estimation, namely feature extraction, multi-scale mapping, local extremum and nonlinear regression, as represented in Fig.5.

1) Feature Extraction: Feature extraction is performed by the convolution layer of the neural network. The input image is filtered with different kinds of filters, so that the essential details are retained and the unwanted details are removed. Convolution is performed using 5 filters of size 3×5.

2) Multi-Scale Mapping: Here the features are filtered with filters of different scales, such as 4×3, 4×5 and 4×7, so that the details can be analyzed clearly.

3) Local Extremum: After the previous two steps, a set of filtered images is obtained. The local extremum step selects the most significant values from these filtered images.

4) Nonlinear Regression: Nonlinear regression is the last step; it maps the selected responses to the final output, which is the transmission map.
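As a small illustration of the BReLU activation mentioned above, the sketch below clips responses to a bounded interval; the interval endpoints t_min and t_max are illustrative values for transmission estimation, not figures taken from this paper.

```python
import numpy as np

def brelu(x, t_min=0.0, t_max=1.0):
    """Bilateral Rectified Linear Unit: a ReLU that is bounded on both sides.

    Restricting the output to [t_min, t_max] keeps the regressed medium
    transmission in a physically meaningful range and narrows the search space.
    """
    return np.minimum(np.maximum(x, t_min), t_max)

# Example: raw network responses mapped into the valid transmission range.
responses = np.array([-0.3, 0.05, 0.42, 0.9, 1.7])
print(brelu(responses))   # [0.   0.05 0.42 0.9  1.  ]
```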
3.5 Recovery of the Scene Radiance

The haze-free image Z_c(p) can be recovered using equation (20). It is worth noting that when the value of t(p) is close to zero, Z_c(p) t(p) is also close to zero; in this situation, if the haze-free image is restored directly, the noise in the haze-free image will be substantially amplified. Thus a lower bound t_m is introduced to constrain t(p) and reduce the noise. The value of t_m is determined by the density of the haze: it is selected as 0.2 if the haze is not heavy, and 0.375 otherwise. The final image is restored as

Z_c(p) = (J_c(p) − A_c(p)) / max(t(p), t_m) + A_c(p)    (20)

It is worth noting that the proposed adaptation mechanism for t_m is coarse, and the quality of the de-hazed images could be improved by a finer choice of t_m with respect to different haze levels.
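A minimal sketch of the recovery step in equation (20) is given below; the function name and the per-pixel layout of the atmospheric light are assumptions made for illustration.

```python
import numpy as np

def recover_scene_radiance(J, A, t, haze_heavy=False):
    """Recover the haze-free image Z_c(p) from the glow-free haze image, eq. (20).

    J : (H, W, 3) glow-free nighttime haze image.
    A : (H, W, 3) refined spatially varying atmospheric light.
    t : (H, W) estimated transmission map.
    The lower bound t_m follows the rule stated above: 0.2 for light haze, 0.375 otherwise.
    """
    t_m = 0.375 if haze_heavy else 0.2
    t_clipped = np.maximum(t, t_m)[..., None]   # constrain t(p) to limit noise amplification
    Z = (J - A) / t_clipped + A
    return np.clip(Z, 0.0, 1.0)
```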
Fig.5: The architecture of DehazeNet. DehazeNet conceptually consists of four sequential operations (feature extraction, multi-scale mapping, local extremum and nonlinear regression).

Fig.6: (a,f,k) haze images; (b,g,l) de-hazed images by the method in [5]; (c,h,m) de-hazed images by the method in [6]; (d,i,n) de-hazed images by the method in [17]; (e,j,o) de-hazed images by the proposed algorithm.

4. EXPERIMENTAL RESULTS

Here the proposed algorithm is compared with the daytime haze removal algorithm in [5] and the nighttime haze removal algorithms in [16], [17] and [24].

4.1 Comparison Among Different Haze Removal Algorithms

For comparison, the results using the de-hazing algorithms in [5], [16], [17], [20], [24] and the proposed algorithm are shown in Fig.6. He's method in [5] is a classical algorithm for daytime haze images, so it is not surprising that it is not suited to removing the haze from a nighttime haze image: only part of the haze is removed by this method and the glow still exists in the final images. The techniques in [16] and [17] are designed for nighttime images; nevertheless, there are visible halo artifacts and the noise is amplified in the de-hazed images, as illustrated in Fig.6 (h),(m), and glow remains around the light sources. The method in [16] was improved in [24]. Neither the algorithm in [20] nor the algorithm in [24] addresses the glow artifacts; as such, the glow artifacts are clearly visible in the de-hazed images produced by the algorithms in [20] and [24].

It is worth noting that a larger t_m is selected in equation (20) to avoid amplifying noise in the sky regions. On the other hand, fine details may also be smoothed by the proposed algorithm, so it is desirable to design a finer t_m for the proposed algorithm. Although our algorithm works well for de-hazing nighttime haze images, there are still some deficiencies. For example, the haze-removed image appears darker than the input image. Fortunately, this problem can be addressed using the single image brightening algorithm in [26]. The running time of the proposed algorithm is slightly longer because the glow-free image has to be segmented into super-pixels.

The approach in [17] works better than the method in [16]. Unfortunately, noise is also amplified and halo artifacts still exist in the final images. The algorithm in [17] assumes that the atmospheric light is locally constant within a grid of small regions; however, this is not always true. On the other hand, according to the characteristics of super-pixels, it is more likely that the atmospheric light is constant within a super-pixel.
5. CONCLUSION

This paper presented a novel super-pixel based nighttime haze image de-hazing algorithm. The proposed algorithm can remove the glow and the haze better than the state-of-the-art methods. Local fine details are also preserved better, and the color distortion and halo artifacts are clearly reduced. As such, the haze-free image is closer to a natural scene. On the other hand, because the proposed algorithm requires the segmentation of the glow-free nighttime haze image into super-pixels, the running time and the complexity are increased. It is known that existing single image haze removal algorithms suffer from amplifying noise in the sky region. This problem could be addressed by selecting a finer adaptive t_m in equation (20), and it can be studied in future research.

REFERENCES

[1] S. G. Narasimhan and S. K. Nayar, "Vision and the atmosphere," International Journal on Computer Vision, vol. 48, no. 3, pp. 233-254, Jun. 2002.
[2] Y. Y. Schechner, S. G. Narasimhan, and S. K. Nayar, "Polarization based vision through haze," Applied Optics, vol. 42, no. 3, pp. 511-525, 2003.
[3] Y. H. Lai, Y. L. Chen, C. J. Chiou, and C. T. Hsu, "Single-image dehazing via optimal transmission map under scene priors," IEEE Transactions on Circuits and Systems for Video Technology, vol. 25, no. 1, pp. 1-14, Jan. 2015.
[4] W. C. Wang, X. H. Yuan, X. J. Wu, and Y. L. Liu, "Fast image dehazing method based on linear transformation," IEEE Transactions on Multimedia, vol. 19, no. 6, pp. 1142-1155, Jun. 2017.
[5] K. M. He, J. Sun, and X. O. Tang, "Guided image filtering," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 6, pp. 1397-1409, Jun. 2013.
[6] F. Kou, W. H. Chen, C. Y. Wen, and Z. G. Li, "Gradient domain guided image filtering," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 4528-4539, Nov. 2015.
[7] Z. G. Li, J. H. Zheng, W. Yao, and Z. J. Zhu, "Single image haze removal via a simplified dark channel," in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1608-1612, Apr. 2015.
[8] Q. S. Zhu, D. Wu, Y. Q. Xie, and L. Wang, "Quick shift segmentation guided single image haze removal algorithm," in Proceedings of the 2014 IEEE International Conference on Robotics and Biomimetics, pp. 113-117, Dec. 2014.
[9] J. Zhang, Y. Cao, and Z. Wang, "Nighttime haze removal based on a new imaging model," IEEE International Conference on Image Processing, pp. 4557-4561, Oct. 2014.
[10] C. Ancuti, C. O. Ancuti, C. De Vleeschouwer, and A. C. Bovik, "Nighttime dehazing by fusion," IEEE International Conference on Image Processing, pp. 2256-2260, 2016.
[11] Z. G. Li, J. H. Zheng, Z. J. Zhu, W. Yao, and S. Q. Wu, "Weighted guided image filtering," IEEE Transactions on Image Processing, vol. 24, no. 1, pp. 120-129, Jan. 2015.
[12] H. T. Xu, G. T. Zhai, X. L. Wu, and X. K. Yang, "Generalized equalization model for image enhancement," IEEE Transactions on Multimedia, vol. 16, no. 1, pp. 61-82, Jan. 2014.
[13] K. T. Shih and H. H. Chen, "Exploiting perceptual anchoring for color image enhancement," IEEE Transactions on Multimedia, vol. 18, no. 2, pp. 300-310, Feb. 2016.
[14] F. Kou, Z. Wei, W. H. Chen, X. M. Wu, C. Y. Wen, and Z. G. Li, "Intelligent detail enhancement for exposure fusion," IEEE Transactions on Multimedia, Aug. 2017.
[15] S. C. Pei and T. Y. Lee, "Nighttime haze removal using color transfer pre-processing and dark channel prior," IEEE International Conference on Image Processing, pp. 957-960, Oct. 2012.
[16] J. Zhang, Y. Cao, and Z. Wang, "Nighttime haze removal based on a new imaging model," IEEE International Conference on Image Processing, pp. 4557-4561, Oct. 2014.
[17] Y. Li, R. T. Tan, and M. S. Brown, "Nighttime haze removal with glow and multiple light colors," IEEE International Conference on Computer Vision, pp. 226-234, Dec. 2015.
[18] Y. Li and M. S. Brown, "Single image layer separation using relative smoothness," IEEE Conference on Computer Vision and Pattern Recognition, 2014.
[19] J. Zhang, Y. Cao, S. Fang, Y. Kang, and C. W. Chen, "Fast haze removal for nighttime image using the reflectance prior," in 2017 IEEE Conference on Computer Vision and Pattern Recognition, Jul. 2017.
[20] K. M. He, J. Sun, and X. O. Tang, "Single image haze removal using dark channel prior," IEEE Transactions on Pattern Analysis and Machine Intelligence, pp. 2341-2353, 2011.
[21] F. Liu, C. Shen, G. Lin, and I. Reid, "Learning depth from single monocular images using deep convolutional neural fields," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 38, no. 10, pp. 2024-2039, Dec. 2016.
[22] Z. G. Li and J. H. Zheng, "Single image brightening via exposure fusion," in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 1756-1760, Shanghai, China, Mar. 2016.
[23] R. Achanta, A. Shaji, K. Smith, A. Lucchi, and P. Fua, "SLIC superpixels compared to state-of-the-art superpixel methods," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 11, pp. 2274-2282, Nov. 2012.
[24] L. K. Choi, J. You, and A. C. Bovik, "Referenceless prediction of perceptual fog density and perceptual image defogging," IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3888-3901, Nov. 2015.
[25] S. G. Narasimhan and S. K. Nayar, "Shedding light on the weather," IEEE Conference on Computer Vision and Pattern Recognition, 2003.
[26] Z. G. Li and J. H. Zheng, "Single image de-hazing using globally guided image filtering," IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 442-450, Jan. 2018.