International Research Journal of Engineering and Technology (IRJET) | Volume: 04, Issue: 11 | Nov 2017 | www.irjet.net | e-ISSN: 2395-0056 | p-ISSN: 2395-0072
INTENSITY ENHANCEMENT IN GRAY LEVEL IMAGES USING HSV COLOR
CODING TECHNIQUE
Er. Harteesh Kumar¹, Dr. Rahul Malhotra², Er. Ashish Grover³
¹ M.Tech Student, Department of E.C.E., GTBKIET Chhapianwali, Malout, Punjab, India
² Director & Principal, GTBKIET Chhapianwali, Malout, Punjab, India
³ Assistant Professor, Department of E.C.E., GTBKIET Chhapianwali, Malout, Punjab, India
Abstract - The capability developed in this work increases efficiency by decreasing the time required to visualize an image clearly and by reducing the probability of errors due to fatigue. The work in this thesis is motivated, from a practical point of view, by several shortcomings of traditional methods. The first problem is the inability of known traditional methods to properly segment objects from the background without interference from object shadows and highlights. Moreover, there is inadequate research on combining hue-based and intensity-based similarity measures to improve the use of color. Ineffective use of color can degrade an application and lessen user acceptance. The HSV color space is quite similar to the way in which humans perceive color, whereas the other models define color in relation to the primary colors. RGB is not efficient because it allocates equal bandwidth to each color component, even though the human eye is more sensitive to the luminance component than to the color components. Thus, many image coding schemes use the HSV color scheme.
Key Words: HSV, Pixels, RGB, CIE, Gray Scale Image
1. INTRODUCTION
This introduction is a practical guide to the challenges of image processing and to the hardware and algorithms used to meet them. Image processing modifies pictures to improve them (enhancement, restoration), to extract information from them (analysis, recognition), and to change their structure (composition, image editing). Images can be processed by optical, photographic, and electronic means, but image processing using digital computers is the most common method because digital methods are fast, flexible, and precise.

An image can be synthesized from a micrograph of various cell organelles by assigning a light intensity value to each cell organelle. The sensor signal is "digitized": converted to an array of numerical values, each value representing the light intensity of a small area of the cell. The digitized values are called picture elements, or "pixels," and are stored in computer memory as a digital image. A typical size for a digital image is an array of 512 by 512 pixels, where each pixel has a value in the range 0 to 255. The digital image is then processed by a computer to achieve the desired result (a brief sketch of this digitization step is given at the end of this introduction).

Image enhancement improves the quality (clarity) of images for human viewing. Removing blurring and noise, increasing contrast, and revealing details are examples of enhancement operations. The original image might have areas of very high and very low intensity that mask details; reducing the noise and blurring and increasing the contrast range can enhance such an image. An adaptive enhancement algorithm reveals these details. Adaptive algorithms adjust their operation based on the image information (pixels) being processed; in this case the mean intensity, contrast, and sharpness (amount of blur removal) could be adjusted based on the pixel intensity statistics in various areas of the image.

Images are produced by a variety of physical devices, including still and video cameras, x-ray devices, electron microscopes, radar, and ultrasound, and are used for a variety of purposes, including entertainment, medical, business (e.g. documents), industrial, military, civil (e.g. traffic), security, and scientific applications. The goal in each case is for an observer, human or machine, to extract useful information about the scene being imaged. Often the raw image is not directly suitable for this purpose and must be processed in some way. Such processing is called image enhancement; processing by an observer to extract information is called image analysis. Enhancement and analysis are distinguished by their output (images versus scene information) and by the challenges faced and methods employed. Image enhancement has been done by chemical, optical, and electronic means, while analysis has been done mostly by humans and electronically.

Digital image processing is a subset of the electronic domain in which the image is converted to an array of small integers, called pixels, representing a physical quantity such as scene radiance, stored in a digital memory, and processed by computer or other digital hardware. Digital image processing, whether used as enhancement for human observers or for autonomous analysis, offers advantages in cost, speed, and flexibility, and with the rapidly falling price and rising performance of personal computers it has become the dominant method in use.
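As a rough illustration of the digitization described above, the following MATLAB sketch samples a synthetic intensity pattern (a stand-in for a sensor signal, purely an assumption of this example) on a 512 by 512 grid and quantizes it to 8-bit pixel values.

```matlab
% A minimal sketch of the digitization step described above: a continuous
% intensity pattern (here a synthetic sine pattern standing in for the
% sensor signal) is sampled on a 512x512 grid and quantized to 0..255.
[u, v] = meshgrid(linspace(0, 1, 512));          % sample positions
signal = 0.5 + 0.5 * sin(8*pi*u) .* cos(6*pi*v); % stand-in signal, values 0..1
pixels = uint8(round(signal * 255));             % 512x512 array of 8-bit pixels
```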
1.1 Concept of Image
Perhaps the first guiding principle is that humans are better at judgment and machines are better at measurement. Along these lines, image enhancement, which generally requires a great deal of numeric computation but little judgment, is well suited to digital processing. On top of this, many digital image processing applications are constrained by severe cost targets. Thus we often face the engineer's dreaded triple
curse, the need to design something good, fast, and cheap all
at once.
1.1.1 Types of Digital Images
The two basic types of digital images are color and black and white. Color images are made up of colored pixels, while black and white images are made of pixels in different shades of gray. Binary and indexed-color images, described below, are special cases.
(a) Black and White Images
A black and white image is made up of pixels, each of which holds a single number corresponding to the gray level of the image at a particular location. These gray levels span the full range from black to white in a series of very fine steps, normally 256 different grays. Since the eye can barely distinguish about 200 different gray levels, this is enough to give the illusion of a stepless tonal scale. Assuming 256 gray levels, each black and white pixel can be stored in a single byte (8 bits) of memory.
(b) Color Images
A color image is made up of pixels, each of which holds three numbers corresponding to the red, green, and blue levels of the image at a particular location. Red, green, and blue (often referred to as RGB) are the primary colors for mixing light; these so-called additive primary colors are different from the subtractive primary colors used for mixing paints (cyan, magenta, and yellow). Any color can be created by mixing the correct amounts of red, green, and blue light. Assuming 256 levels for each primary, each color pixel can be stored in three bytes (24 bits) of memory, which corresponds to roughly 16.7 million different possible colors. Note that, for images of the same size, a black and white version will use one third of the memory of a color version.
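The storage figures quoted above follow from simple arithmetic; the short MATLAB sketch below works them out for a 512 by 512 image (the image size is taken from the introduction, purely as an example).

```matlab
% A minimal sketch of the storage arithmetic above for a 512x512 image.
rows = 512; cols = 512;
gray_bytes  = rows * cols;       % 1 byte/pixel  = 262,144 bytes (256 KB)
color_bytes = rows * cols * 3;   % 3 bytes/pixel = 786,432 bytes (768 KB)
num_colors  = 256^3;             % 16,777,216 representable 24-bit colors
```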
(c) Binary or Bi-level Images
Binary images use only a single bit to represent each pixel. Since a bit can exist in only two states (on or off), every pixel in a binary image must be one of two colors, usually black or white. This inability to represent intermediate shades of gray is what limits their usefulness in dealing with photographic images.
(d) Indexed Color Images
Some color images are created using a limited palette of colors, typically 256 different colors. These images are referred to as indexed color images because the data for each pixel consists of a palette index indicating which of the colors in the palette applies to that pixel. There are several problems with using indexed color to represent photographic images. First, if the image contains more distinct colors than are in the palette, techniques such as dithering must be applied to represent the missing colors, and this degrades the image. Second, combining two indexed color images that use different palettes, or even retouching part of a single indexed color image, creates problems because of the limited number of available colors.
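The palette lookup just described can be made concrete with a tiny MATLAB sketch; the four-entry palette and the 2-by-2 index image below are illustrative, not taken from this paper.

```matlab
% A minimal sketch of the indexed-color representation described above:
% each pixel stores only a palette index; display resolves that index
% through the palette (colormap).
map = [0 0 0;        % entry 1: black
       1 0 0;        % entry 2: red
       0 1 0;        % entry 3: green
       0 0 1];       % entry 4: blue
X   = [1 2; 3 4];    % 2x2 image of palette indices (1-based)
RGB = ind2rgb(X, map);   % expand each index to its RGB triple for display
```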
2. HARDWARE
Lights: All image processing applications start with some form of illumination, typically light but more generally some form of energy. In some cases ambient light must be used, but more typically the illumination can be designed for the application. In such cases the battle is often won or lost right here: no amount of clever software can recover information that simply isn't there due to poor illumination. Generally one can choose the illumination's intensity, direction, spectrum (color), and whether it is continuous or strobed. Intensity is the easiest to choose and the least important; any decent image processing algorithm should be immune to significant variations in contrast, although applications that demand photometric accuracy will require control and calibration of intensity. Direction is harder to choose and more important, as any professional photographer knows.
3. LINEAR FILTERS
Linear filters amplify or attenuate selected spatial frequencies, can achieve such effects as smoothing and sharpening, and usually form the basis of resampling and boundary detection algorithms. A linear filter is defined by a convolution operation, where each output pixel is obtained by multiplying each neighborhood pixel by the corresponding element of a like-shaped set of values called a kernel, and then summing those products. Figure 1, for example, shows a rather noisy image of a cross within a circle. Convolution with a smoothing (low-pass) kernel produces a smoothed result; in this example the neighborhood is 25 pixels arranged in a 5x5 square. Note how the high-frequency noise has been attenuated, but at the cost of some loss of edge sharpness. Note also that the kernel elements sum to 1.0 for unity gain. The smoothing kernel is a 2D Gaussian approximation. The 2D Gaussian is among the most important functions used for linear filtering: its frequency response is also a Gaussian, which results in a well-defined pass band and no ringing. Kernels that approximate the difference of two Gaussians of different size make excellent band-pass and high-pass filters. Figure 1 also illustrates the effect of a band-pass filter based on a difference-of-Gaussians approximation using a 10x10 kernel: both the high-frequency noise and the low-frequency uniform regions have been attenuated, leaving only the mid-frequency components of the edges.
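A minimal MATLAB sketch of these two filtering steps follows. It is not the authors' code; the input file name, the kernel size, and the Gaussian widths are illustrative assumptions.

```matlab
% Smoothing with a normalized 5x5 Gaussian kernel and band-pass filtering
% with a difference of Gaussians, as described above.
I = double(imread('cell.png'));            % grayscale image, values 0..255

[x, y] = meshgrid(-2:2, -2:2);             % 5x5 neighborhood coordinates
g1 = exp(-(x.^2 + y.^2) / (2*1.0^2));
g1 = g1 / sum(g1(:));                      % elements sum to 1 (unity gain)
smoothed = conv2(I, g1, 'same');           % low-pass: attenuates pixel noise

g2 = exp(-(x.^2 + y.^2) / (2*2.0^2));
g2 = g2 / sum(g2(:));
dog = g1 - g2;                             % difference of Gaussians, sums to ~0
edges = conv2(I, dog, 'same');             % band-pass: keeps mid-frequency edges
```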
Figure-1: An image enhanced to reduce noise or
emphasize boundaries.
Physiological Factors
Color perception is determined by an interaction among three photopigments; the perceived color is a mixture of the relative responses of the red, green, and blue photopigments, in much the same way as a television camera creates color [4]. Given the dramatic imbalance among the percentages of cells containing red (approximately 64%), green (approximately 32%), and blue (approximately 2%) photopigments, it is clear that the perception of color is both highly specialized and physiologically biased (data from March, 1988).
Color results from the interaction of light with the nervous
system. There are several components that affect color
perception, including the eye lens, the retina, and a color
processing unit along the optic nerve. These areas are
discussed in the following sections.
(a) Lens
The function of the lens is to focus the incoming light on the retina, which contains the photoreceptors. Different wavelengths of light have different focal lengths, so for pure hues the lens must change its shape so that the light is focused correctly. For a given lens curvature, longer wavelengths have a longer focal length; red has the longest focal length and blue the shortest. To keep an image focused on the retina, the lens curvature must therefore change with wavelength, with red light requiring the greatest curvature and blue light the least. This means that if pure blue and pure red hues are intermixed, the lens is constantly changing shape and the eye becomes tired.
A related effect is called chromostereopsis: pure colors located at the same distance from the eye appear to be at different distances, e.g. reds appear closer and blues more distant. Sometimes pure blues focus in front of the retina and so appear unfocused; at night, a deep blue sign may appear fuzzy while other colors appear sharp. The lens also absorbs about twice as much light in the blue region as in the red region. As people age the lens yellows, which means it absorbs even more of the shorter wavelengths. The result is that people are more sensitive to longer wavelengths (yellows and oranges) than to shorter wavelengths (cyan to blue), and this difference increases with age. The fluid between the lens and the retina also absorbs light, and this effect increases as people age, so the older people get, the less sensitive they are to light in general (the apparent brightness level decreases), and especially the less sensitive they are to blue.
(b) Retina
The retina contains the photoreceptors that absorb photons
and transmit chemical signals to the brain. There are two
types: rods, which are night-vision receptors and have no
color dependency, and cones, which have color sensitivity
and require a higher level of light intensity than rods.
Figure-2: Spectral sensitivities of the three classes of
photoreceptors in the retina
As shown in Figure 2, there are three types of photopigments in the cones: "blue" with a maximum sensitivity at 430 nm, "green" with a maximum sensitivity at 530 nm, and "red" at 560 nm (this last wavelength actually corresponds to yellow). Light at a single wavelength will partially activate all three types of cones; at a wavelength of 470 nm, for example, the blue response is strongest, with some red and green components. The percentages of the three cone types are not equal: blue (4%), green (32%), and red (64%). In addition, the cones are distributed unevenly across the retina. The center of the retina has a dense concentration of cones but no rods, while the periphery has many rods but few cones. The color distribution is also asymmetrical: the center of the retina has primarily green cones, surrounded by red-yellow cones, with the blue cones lying mainly on the periphery. The center of
the retina has no blue cones. Objects are seen by edge
detection, where an edge can be created by a difference in
color or brightness or both. Edges formed by color
differences alone, with no brightness differences, appear
fuzzy and unfocused, so changes in brightness should be
added to get sharp edges.
Photoreceptors adjust their sensitivity to the overall light level; going into or out of a dark room, for example, requires some adjustment time. There is also a required minimum intensity level for the photoreceptors to respond. This minimum varies with wavelength, with the highest sensitivity in the center of the spectrum. Therefore, blues and reds must have a higher intensity than greens or yellows in order to be perceived.
(c) Brain
From the retina, the optic nerve (actually a collection of nerves) connects to the brain, but before the signal reaches the brain there is a color-processing stage, called the lateral geniculate body. This recombines the RGB color information into three new channels as follows:
R-G gives red or green color perception
R+G gives the perception of brightness and yields yellow (Y)
Y-B gives yellow or blue color perception
Thus, blue plays no part in brightness, so colors differing only in the amount of blue do not produce sharp edges. Also, note that since blue and yellow, and red and green, are linked together, it is impossible to experience combinations such as reddish green or bluish yellow.
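The recombination listed above can be written out directly; the following MATLAB sketch applies it to a single made-up RGB pixel (the numeric values are illustrative only, and real retinal processing is nonlinear).

```matlab
% A minimal sketch of the opponent-channel recombination listed above,
% applied to one RGB pixel with components in 0..1.
R = 0.8; G = 0.6; B = 0.2;    % example cone-like responses
Y  = R + G;                   % brightness-like channel (yellow)
RG = R - G;                   % red/green opponent channel
YB = Y - B;                   % yellow/blue opponent channel
```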
(d) Color Blindness
About nine percent of the population has some kind of color perception problem. The most common is red-green deficiency, which can arise from a deficiency of either the red or the green photopigments. These people have difficulty distinguishing any color that depends on the red:green ratio.
4. GENERAL GUIDELINES BASED ON PHYSIOLOGY
These guidelines are drawn from Munch's principles based on physiology [5].
Avoid the simultaneous display of highly saturated, spectrally extreme colors. This causes the lens to change shape rapidly and thus tires the eyes. Desaturate the colors, or else use colors that are close together in the spectrum.
Pure blue should be avoided for text, thin lines, and small shapes. Since there are no blue cones in the center of the retina, these are difficult to see. Blue does, however, make an excellent background color; on a computer display, for example, it tends to blur the raster lines.
Avoid adjacent colors that differ only in the amount of blue. Since blue does not contribute to brightness, such boundaries create fuzzy edges.
Older operators need higher brightness levels to distinguish
colors.
Colors change in appearance as the ambient light level
changes.
The magnitude of a detectable change in color varies across
the spectrum.
It is difficult to focus upon edges created by color alone.
Avoid red and green in the periphery of large displays.
Opponent colors go well together.
For color-deficient observers, avoid single-color distinctions.
Color selection guidelines based on human color vision:
Avoid adjacent areas of strong blue and strong red in a display, to prevent unwanted depth effects (colors appearing to lie in different planes).
Never use the blue channel alone for fine detail such as text or graphics. Do not use, for example, blue text on a black background or yellow text on a white background.
Areas of strong color and high contrast can produce afterimages when the viewer looks away from the screen, resulting in visual stress from prolonged viewing.
Do not use hue alone to encode information in applications where serious consequences might ensue if a color-deficient user were to make an incorrect selection.
5. PSYCHOLOGICAL FACTORS
As well understood as the physiology of color is, it provides little explanation for our opinions of color and color combinations. At the very least, opinions of color are learned and highly associative. For example, as children we often had a "favorite color" and we liked everything (clothes, toys, books) that matched our preference. Over time we learn a variety of color schemes, and in most cases our tastes become more refined. But even as adults we are influenced by fashion, and may still associate our more sophisticated sense of color with increasingly sophisticated emotions, desires, or impressions. For example, even a cursory examination of changes in interior design from the 1950s to the present reveals a dramatic evolution of what was considered a warm or even a comfortable color.
6. ALGORITHM BACKGROUND
6.1 Color Space
A color model is an abstract mathematical model describing the way colors can be represented as tuples of numbers, typically as three or four values or color components. When this model is associated with a precise description of how the components are to be interpreted (viewing conditions, etc.), the resulting set of colors is called a color space. This section describes ways in which human color vision can be modeled.
Tristimulus color space
Fig-3: 3D representation of the human color space.
One can picture this space as a region in three-dimensional Euclidean space if one identifies the x, y, and z axes with the stimuli for the long-wavelength (L), medium-wavelength (M), and short-wavelength (S) receptors. The origin, (S, M, L) = (0, 0, 0), corresponds to black. White has no definite position in this diagram; rather, it is defined according to the color temperature or white balance as desired or as available from ambient lighting. The human color space is a horseshoe-shaped cone, as shown here, extending from the origin to, in principle, infinity. The most saturated colors are located at the outer rim of the region, with brighter colors farther removed from the origin. The human tristimulus space has the property that additive mixing of colors corresponds to the adding of vectors in this space. This makes it easy, for example, to describe the possible colors (gamut) that can be constructed from the red, green, and blue primaries of a computer display.
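As a small illustration of the vector-addition property just mentioned, the MATLAB sketch below mixes two lights in (L, M, S) space; the cone-response numbers are invented for the example.

```matlab
% A minimal sketch of additive mixing as vector addition in (L, M, S)
% tristimulus space, as described above.
c1  = [0.30 0.55 0.10];       % (L, M, S) response to light 1
c2  = [0.20 0.25 0.60];       % (L, M, S) response to light 2
mix = c1 + c2;                % response to the two lights superimposed
```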
6.2 The CIE Color Model
Though some colors can be created by a single, pure wavelength, most colors are the result of a mixture of wavelengths. The Commission Internationale de l'Eclairage (CIE), an international body with a French name, worked through the first half of the 20th century to develop a method for systematically measuring a color in relation to the wavelengths it contains. This system became known as the CIE color model (or system). The model was originally developed based on the tristimulus theory of color perception, which rests on the fact that our eyes contain three different types of color receptors, called cones, that respond differently to different wavelengths of visible light. This differential response of the three cones is measured by three variables, X, Y, and Z, in the CIE color model.
Notice in Figure 4 that the perimeter of the diagram marks the wavelengths of visible light; along this edge lie the "pure" spectral colors. Other colors are produced by mixing varying amounts of different wavelengths. Notice also that the purples at the bottom do not have a wavelength associated with them; these are the non-spectral colors.
Fig-4: CIE color model.
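To make the X, Y, Z variables concrete, the MATLAB sketch below converts a single linear-RGB pixel to XYZ using the standard sRGB/D65 matrix; that matrix comes from the sRGB specification, not from this paper, and the pixel value is arbitrary.

```matlab
% A minimal sketch: CIE X, Y, Z tristimulus values for one linear-RGB
% pixel, using the published sRGB (D65) conversion matrix.
M = [0.4124 0.3576 0.1805;
     0.2126 0.7152 0.0722;
     0.0193 0.1192 0.9505];
rgb = [0.5; 0.4; 0.3];        % linear (not gamma-encoded) RGB pixel
xyz = M * rgb;                % resulting X, Y, Z responses
```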
7. RESULTS AND DISCUSSION
7.1 Implementation of Color Coding Methods
The output responses of the three color coding techniques are plotted (using MATLAB) and are labelled the Rainbow Transform, the Phase and Frequency Transform, and the HSV Transform. The results are then compared across different input values and across the outputs of the three different methods. For Example 1, a gray scale image, the color-coded outputs of the Rainbow method, the Phase and Frequency Transform, and the HSV method are plotted along with their enlarged views, and the Phase and Frequency Transform outputs are also plotted for different input parameters.
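The core HSV color coding step can be sketched in a few lines of MATLAB. This is not the authors' exact code; the input file name, the hue span of 0.7, and the choice to let value track the original intensity are assumptions made for illustration.

```matlab
% A minimal sketch of HSV color coding of a gray-level image: each gray
% level is mapped to a hue, saturation is kept high, value follows the
% intensity, and the result is converted to RGB for display.
I = double(imread('cell.png')) / 255;   % gray levels scaled to 0..1

H = (1 - I) * 0.7;                      % dark pixels toward blue, bright toward red
S = ones(size(I));                      % fully saturated
V = I;                                  % brightness follows original intensity

rgb = hsv2rgb(cat(3, H, S, V));         % pseudo-colored image
imwrite(rgb, 'cell_hsv.png');
```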
Figure-4(a): Input gray scale image of a cell.
Figure-4(c): The output response of the HSV method for the gray scale cell image.
The image color coding methods are also implemented on a gray scale image of a bag containing a threat element, with specifications of phase (p) = 1.78 and frequency (f) = 8.29. The output responses of this image for the Rainbow method, the Phase and Frequency Transform, and the HSV method are plotted along with their enlarged views. The output responses of the Phase and Frequency Transform are also plotted for different input phase and frequency parameters.
8. CONCLUSION
Among the different coloring schemes, the HSV scheme that was developed based on the color survey results was ranked highest. However, the other color maps were ranked very close to the HSV map, and the cosine color map results were impressive. The difficulty with the HSV map is setting or picking the threshold; this can be solved by establishing an auto-thresholding algorithm. The cosine color map produced very continuous and smooth results compared to the other maps. In addition, color coding images that have already been enhanced may produce better results; currently, color coding is applied directly to the intensity-stretched image. This work shows that 90% of the edges are about the same in the gray level and color images, whereas the remaining 10% of edges may be extracted from the color images using the color coding technique.
To acquire statistically significant results, the images should be presented in a pseudo-random fashion to avoid the influence of other images in detecting the threat. False positives should be evaluated by introducing images without any threat.
Figure-4(b): The output response of the Rainbow Transform and the Phase and Frequency Transform.
REFERENCES
[1] S. T. Kerr, "Review and analysis of color coding research for visual displays," Human Factors (ERIC Document Reproduction No. ED 285 545), February 1987.
[2] P. Wright and A. Lickorish, "Color in document design," IEEE Transactions on Professional Communication, 34(3), 180-185.
[3] Lawrence J. Najjar, "Using color effectively" (IBM TR52.0018), Atlanta, GA: IBM Corporation, January 1990.
[4] T. Faiola, "Principles and guidelines for screen display interface," The Videodisc Monitor, 8(2), 27-29.
[5] W. Horton, "A guide to the confident and appropriate use of color," IEEE Transactions on Professional Communication, 34(3), 160-171.
[6] Michael J. Swain, "Color indexing," International Journal of Computer Vision, 7(1), 11-32, June 1991.
[7] J. Durrett, "How to use color displays effectively: The elements of color vision and their implications," Pipeline, 7(2), 13-16.
[8] B. Shneiderman, "Designing the User Interface: Strategies for Effective Human-Computer Interaction," 2nd edition, 1992.
[9] H. Tang, "Monochrome image representations and segmentation based on pseudo-color," CA85594 (EXW) and NIH R29 14715 (DG), 1993.