International Journal of Engineering Research and Development
e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com
Volume 13, Issue 3 (March 2017), PP.25-35
Automated Crop Inspection and Pest Control Using Image Processing
J. Mahalakshmi¹, PG Student, G. Shanthakumari²
¹ Sri Sairam Engineering College, Chennai, India.
² Assistant Professor, Department of ECE, Sri Sairam Engineering College, Chennai, India.
ABSTRACT: Agriculture is the backbone of our country. India is an agricultural country where most of the population depends on agriculture. Research in agriculture is aimed at increasing productivity and profit. Several automated systems reported in the literature have been developed for irrigation control and environmental monitoring in the field. However, it is essential to monitor plant growth stage by stage and take decisions accordingly. In addition to monitoring environmental parameters such as pH, moisture content and temperature, it is also essential to identify the onset of plant diseases, as this is the key to preventing losses in the yield and quantity of agricultural produce. Plant disease identification by continuous visual monitoring is a very difficult task for farmers; at the same time it is less accurate and can be done only over limited areas. Hence this project aims at developing an image processing algorithm to identify diseases in the rice plant. Rice blast disease in rice is caused by Magnaporthe grisea, and the disease also occurs in wheat, rye, barley and pearl millet. Rice blast disease affects 60 million people in 85 countries worldwide. An image processing technique is adopted as it is more accurate. Early disease detection can increase crop production by enabling proper pesticide usage.
I. INTRODUCTION
A hardware prototype of the proposed system will be developed using an ARM processor. The agricultural sector plays an important role in economic development by providing rural employment. Management of the paddy plant from the early stage to the mature harvest stage increases the yield. Paddy is one of the nation's most important products, as it is considered one of India's staple food and cereal crops; because of that, many efforts have been taken to ensure its safety, one of them being the monitoring of paddy plants. Paddy plants are affected by various fungi. This work focuses on recognizing a paddy plant disease, namely rice blast disease. Proper detection and recognition of the disease is very important for applying fertilizer, and there will be a decline in production if diseases are not recognized at an early stage. The main goal of this work is to build an image recognition system that identifies the paddy plant diseases affecting the cultivation of paddy. The system also monitors the environmental parameters. The paddy leaf image is given as the input image. To reduce correlated colour information, the RGB image is converted to gray scale. The morphological process consists of erosion, dilation, opening and closing operations. A morphological opening operation is applied to the gray scale paddy leaf image to reduce small noise, and a morphological closing operation is used to smooth the leaf structure and to fuse narrow breaks.
This process leads to a thresholding calculation, in which image segmentation is performed. After this process, the infected region of the paddy leaf is found.
There are four main steps used for the detection of plant leaf diseases:
1) RGB image acquisition.
2) Convert the input image into gray scale
3) Segment the component
4) Obtain the useful segment
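The paper's later sections rely on MATLAB's rgb2gray and morphological operators, so the four steps above can be prototyped directly with the Image Processing Toolbox. The sketch below is a minimal illustration only; the file name, structuring-element size, Otsu threshold choice and minimum region area are assumptions for illustration, not values from this work.
% Minimal sketch of the four-step detection pipeline (assumed parameters).
RGB = imread('paddy_leaf.jpg');      % 1) RGB image acquisition (file name assumed)
G = rgb2gray(RGB);                   % 2) convert to gray scale
se = strel('disk', 2);               % structuring element (size assumed)
Go = imopen(G, se);                  % opening removes small bright noise
Gc = imclose(Go, se);                % closing smooths structure and fuses narrow breaks
T = graythresh(Gc);                  % Otsu (maximum variance) threshold in [0,1]
BW = imbinarize(Gc, T);              % 3) segment the components
BW = bwareaopen(~BW, 50);            % 4) keep the useful segment: assume dark lesions,
                                     %    drop regions smaller than 50 pixels (assumed)
imshow(BW); title('Candidate infected region');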
Image processing, whose goal here is to investigate the history of an image using passive approaches, has emerged as a possible way to address the above problem. The basic idea is that most, if not all, image processing tools leave some (usually imperceptible) traces in the processed image, and hence the presence of these traces can be investigated in order to understand whether the image has undergone some kind of processing or not. In recent years many algorithms for detecting different kinds of traces have been proposed; they usually extract a set of features from the image and use them to classify the content as exposing the trace or not. Computerized analysis of medical images is gaining importance day by day, as it often produces higher sensitivity irrespective of the experience of the analyst. For this reason various image analysis techniques are used for the automatic detection of various diseases in paddy. Various methods have been developed for the detection
of exudates. These include thresholding and edge detection based techniques, FCM based approaches, gray level variation based approaches, and multilayer perceptron based approaches. Two principal research paths have evolved under the name of digital image processing. The first includes methods that attempt to answer the question: was the image captured by the device it is claimed to have been acquired with? This is done by performing a kind of ballistic analysis to identify the device that captured the image, or at least to determine which devices did not capture it.
The history of a digital image can be represented as a composition of several steps, collected into three
main phases: acquisition, coding, and editing. These methods will be collected in the following under the
common name of image source device identification techniques. The second group of methods aims instead at
exposing traces of semantic manipulation (i.e. forgeries) by studying inconsistencies in natural image statistics.
Digital image processing allows one to enhance image features of interest while attenuating detail irrelevant to a given application, and then extract useful information about the scene from the enhanced image. This introduction is a practical guide to the challenges, and to the hardware and algorithms used to meet them. Images are produced by a variety of physical devices. Such processing is called image enhancement; processing by an observer to extract information is called image analysis. Enhancement and analysis are distinguished by their output (images vs. scene information) and by the challenges faced and methods employed. Image enhancement has been done by chemical, optical, and electronic means, while analysis has been done mostly by humans and electronically. Digital image processing is a subset of the electronic domain wherein the image is converted to an array of small integers called pixels, representing a physical quantity such as scene radiance, stored in a digital memory, and processed by computer or other digital hardware. Digital image processing, either as enhancement for human observers or for autonomous analysis, offers advantages in cost, speed, and flexibility, and with the rapidly falling price and rising performance of personal computers it has become the dominant method in use.
Fig 1. Block Diagram
Paddy: A paddy field is a flooded parcel of arable land used for growing semiaquatic rice. Paddy cultivation should not be confused with cultivation of deep water rice, which is grown in flooded conditions with water more than 50 cm (20 in) deep for at least a month. Genetic evidence shows that all forms of paddy rice, both indica and japonica, spring from a domestication of the wild rice Oryza rufipogon that first occurred 8,200–13,500 years ago south of the Yangtze River in present-day China.
However, the domesticated indica subspecies currently appears to be a product of the introgression of favorable alleles from japonica at a later date, so there are possibly several events of cultivation and domestication. Paddy fields are the typical feature of rice farming in east, south and southeast Asia. Fields can be built into steep hillsides as terraces and adjacent to depressed or steeply sloped features such as rivers or marshes. They can require a great deal of labor and materials to create, and need large quantities of water for irrigation. Oxen and water buffalo, adapted for life in wetlands, are important working animals used extensively in paddy field farming.
During the 20th century, paddy-field farming became the dominant form of growing rice. Hill tribes of Thailand still cultivate dry-soil varieties called upland rice. Paddy field farming is practiced in Cambodia, Bangladesh, China, Taiwan, India, Indonesia, Iran, Japan, North Korea, South Korea, Malaysia, Myanmar, Nepal, Pakistan, the Philippines, Sri Lanka, Thailand, Vietnam and Laos, as well as in northern Italy, the Camargue in France, the Artibonite Valley in Haiti, and the Sacramento Valley in California. Paddy fields are a major source of atmospheric methane and have been estimated to contribute in the range of 50 to 100 million tonnes of the gas per annum. Studies have shown that this can be significantly reduced, while also boosting crop yield, by draining the paddies to allow the soil to aerate and interrupt methane production. Studies have also shown variability in the assessment of methane emission using local, regional and global factors, calling for better inventorisation based on micro-level data.
Table 1. Types of Diseases
Nursery diseases:
- Blast – Pyricularia grisea (P. oryzae)
- Bacterial Leaf Blight – Xanthomonas oryzae pv. oryzae
- Rice tungro disease – Rice tungro virus (RTSV, RTBV)
Main field diseases:
- Brown spot – Helminthosporium oryzae
- Sheath Rot – Sarocladium oryzae
- Sheath Blight – Rhizoctonia solani
- False Smut – Ustilaginoidea virens
- Grain discolouration – fungal complex
- Leaf streak – Xanthomonas oryzae pv. oryzicola
Grayscale Process
In photography and computing, a grayscale digital image is an image in which the value of each pixel is a single sample, that is, it carries only intensity information. Images of this sort, also known as black-and-white, are composed exclusively of shades of gray, varying from black at the weakest intensity to white at the strongest. Grayscale images are distinct from one-bit bi-tonal black-and-white images, which in the context of computer imaging are images with only two colours, black and white (also called bi-level or binary images). Grayscale images have many shades of gray in between.
Grayscale images are often the result of measuring the intensity of light at each pixel in a single band of the electromagnetic spectrum (e.g. infrared, visible light, ultraviolet), and in such cases they are properly monochromatic when only a given frequency is captured. But they can also be synthesized from a full colour image; see the discussion of converting to grayscale below. The intensity of a pixel is expressed within a given range between a minimum and a maximum, inclusive. This range is represented in an abstract way as a range from 0 (total absence, black) to 1 (total presence, white), with any fractional values in between. This notation is used in academic papers, but it does not define what "black" or "white" is in colorimetric terms.
Another convention is to employ percentages, so the scale is then from 0% to 100%. This gives a more intuitive approach, but if only integer values are used, the range encompasses a total of only 101 intensities, which is insufficient to represent a broad gradient of gray. The percentile notation is also used in printing to denote how much ink is employed in halftoning, but there the scale is reversed: 0% is the paper white (no ink) and 100% a solid black (full ink). In computing, although the grayscale can be computed through rational numbers, image pixels are stored in binary, quantized form. Some early grayscale monitors could only show up to sixteen (4-bit) different shades, but today grayscale images (such as photographs) intended for visual display (both on screen and printed) are commonly stored with 8 bits per sampled pixel, which allows 256 different intensities (i.e., shades of gray) to be recorded, typically on a non-linear scale. Technical uses (e.g. in medical imaging or remote sensing applications) often require more levels, to make full use of the sensor accuracy (typically 10 or 12 bits per sample) and to guard against round-off errors in computations. The TIFF and PNG (among other) image file formats support 16-bit grayscale natively, although browsers and many imaging programs tend to ignore the low-order 8 bits of each pixel. No matter what pixel depth is used, the binary representations assume that 0 is black and the maximum value (255 at 8 bpp, 65,535 at 16 bpp, etc.) is white, if not otherwise noted. Conversion of a color image to grayscale is not unique; different weightings of the color channels represent the effect of shooting black-and-white film with different-colored photographic filters on the camera.
I = rgb2gray(RGB)
newmap = rgb2gray(map)
I = rgb2gray(RGB) converts the truecolor image RGB to the grayscale intensity image I. The rgb2gray function converts RGB images to grayscale by eliminating the hue and saturation information while retaining the luminance. If you have the Parallel Computing Toolbox installed, rgb2gray can perform this conversion on a GPU. newmap = rgb2gray(map) returns a grayscale colormap equivalent to map.
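As a brief usage illustration of the syntax above (the image file name is assumed):
RGB = imread('paddy_leaf.jpg');      % truecolor M-by-N-by-3 input (file name assumed)
I = rgb2gray(RGB);                   % M-by-N grayscale intensity image
imshowpair(RGB, I, 'montage');       % view the colour and gray scale versions side by side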
A common strategy is to use the principles of photometry or, more broadly, colorimetry, to match the luminance of the grayscale image to the luminance of the original color image. This also ensures that both images will have the same absolute luminance, as can be measured in SI units of candelas per square meter, in any given area of the image, given equal white points. In addition, matching luminance provides matching perceptual lightness measures, such as L* (as in the 1976 CIE Lab color space), which is determined by the linear luminance Y (as in the CIE 1931 XYZ color space).
To convert a color from a colorspace based on an RGB color model to a grayscale representation of its luminance, weighted sums must be calculated in a linear RGB space, that is, after the gamma compression function has been removed via gamma expansion.
For the sRGB color space, gamma expansion is defined as
Clinear = Csrgb / 12.92                    for Csrgb <= 0.04045
Clinear = ((Csrgb + 0.055) / 1.055)^2.4    for Csrgb > 0.04045
where Csrgb represents any of the three gamma-compressed sRGB primaries (Rsrgb, Gsrgb and Bsrgb, each in the range [0,1]) and Clinear is the corresponding linear-intensity value (R, G and B, also in the range [0,1]).
Then, luminance is calculated as a weighted sum of the three linear-intensity values. The sRGB color space is defined in terms of the CIE 1931 linear luminance Y, which is given by
Y = 0.2126 R + 0.7152 G + 0.0722 B.
The coefficients represent the measured intensity perception of typical trichromat humans, depending
on the primaries being used; in particular, human vision is most sensitive to green and least sensitive to blue. To
encode grayscale intensity in linear RGB, each of the three primaries can be set to equal the calculated linear
luminance Y (replacing R, G, B by Y, Y, Y to get this linear grayscale). Linear luminance typically needs to be
gamma compressed to get back to a conventional non-linear representation.
For sRGB, each of the three primaries is then set to the same gamma-compressed value Ysrgb, given by the inverse of the gamma expansion above:
Ysrgb = 12.92 Y                            for Y <= 0.0031308
Ysrgb = 1.055 Y^(1/2.4) - 0.055            for Y > 0.0031308
Web browsers and other software that recognize sRGB images will typically produce the same rendering for such a grayscale image as they would for an sRGB image having the same values in all three color channels.
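A minimal MATLAB sketch of this colorimetric conversion, written out from the sRGB gamma expansion and CIE 1931 luminance weights quoted above (the input file name is an assumption; this is an illustrative re-implementation, not code from the original work):
% Colorimetric RGB-to-gray conversion via linear luminance (sketch).
RGB = im2double(imread('paddy_leaf.jpg'));          % values scaled to [0,1]
lin = RGB / 12.92;                                  % gamma expansion, low segment
idx = RGB > 0.04045;
lin(idx) = ((RGB(idx) + 0.055) / 1.055) .^ 2.4;     % gamma expansion, power segment
Y = 0.2126*lin(:,:,1) + 0.7152*lin(:,:,2) + 0.0722*lin(:,:,3);   % CIE 1931 linear luminance
Ysrgb = 12.92 * Y;                                  % gamma compression back to sRGB
idx = Y > 0.0031308;
Ysrgb(idx) = 1.055 * Y(idx).^(1/2.4) - 0.055;
imshow(Ysrgb);                                      % conventional non-linear gray image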
Fig 2. Gray scale image
II. Morphological Transformations
Morphology is a technique of image processing based on the shape and form of objects. Morphological methods apply a structuring element to an input image, creating an output image of the same size. The value of each pixel in the output image is based on a comparison of the corresponding pixel in the input image with its neighbours. By choosing the size and shape of the neighbourhood, you can construct a morphological operation that is sensitive to specific shapes in the input image.
Morphological operations include erosion, dilation, opening and closing; combinations of these operations are often used to perform morphological image analysis. Morphological operations apply a structuring element to an input image, creating an output image of the same size. Irrespective of the size of the structuring element, the origin is located at its centre. Morphological opening is erosion followed by dilation with the same structuring element,
opening(f) = dilation(erosion(f, B), B),
and morphological closing is dilation followed by erosion,
closing(f) = erosion(dilation(f, B), B),
where m is a homothetic parameter (size m means a square of (2m + 1) × (2m + 1) pixels) and B is the structuring element of size 3 × 3 (here m = 1).
Dilation is a transformation that produces an image that is the same shape as the original, but a different size; it stretches or shrinks the original. Dilation increases the valleys and enlarges the width of maximum regions, so it can remove negative impulsive noise but does little to positive noise. The dilation of A by the structuring element B is defined by
A ⊕ B = { a + b | a ∈ A, b ∈ B }.
If B has its centre on the origin, as before, then the dilation of A by B can be understood as the locus of the points covered by B when the centre of B moves inside A. Dilation of image f by structuring element s is given by f ⊕ s. The structuring element s is positioned with its origin at (x, y) and the new pixel value is determined using the rule:
g(x, y) = 1 if s hits f, and 0 otherwise.
The following figure illustrates the morphological dilation of a gray scale image. Note how the structuring element defines the neighbourhood of the pixel of interest, which is circled. The dilation function applies the appropriate rule to the pixels in the neighbourhood and assigns a value to the corresponding pixel in the output image. In the figure, the morphological dilation function sets the value of the output pixel to 16 because it is the maximum value of all the pixels in the neighbourhood of the input pixel defined by the structuring element. The easiest way to describe dilation is to imagine the same fax or text written with a thicker pen. In a binary image, if any of the neighbourhood pixels is set to the value 1, the output pixel is set to 1.
Fig 3. Dilation Process
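As a small numeric illustration of this max-of-neighbourhood rule (the 3-by-3 patch below is invented for the example):
A = [10 12 14; 11 9 13; 16 8 15];    % small gray scale patch (values assumed)
se = strel('square', 3);             % 3-by-3 flat structuring element
D = imdilate(A, se);                 % each output pixel = max over its 3x3 neighbourhood
D(2,2)                               % the centre pixel (9) becomes 16, the neighbourhood maximum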
Erosion is used to reduce objects in the image; it reduces the peaks and enlarges the widths of minimum regions, so it can remove positive noise but has little effect on negative impulsive noise. (For example, the erosion of a dark blue square by a disc results in a smaller, light blue square.) The erosion of the binary image A by the structuring element B is defined by
A ⊖ B = { z | Bz ⊆ A },
that is, the set of positions z at which B, translated to z, is entirely contained in A. In a binary image, if any of the neighbourhood pixels is set to 0, the output pixel is set to 0. Erosion of image f by structuring element s is given by f ⊖ s. The structuring element s is positioned with its origin at (x, y) and the new pixel value is determined using the rule:
g(x, y) = 1 if s fits f, and 0 otherwise.
In the above rule, "fit" means that every on pixel in the structuring element covers an on pixel in the image. The following figure illustrates the morphological erosion of a gray scale image. Note how the structuring element defines the neighbourhood of the pixel of interest, which is circled. The erosion function applies the appropriate rule to the pixels in the neighbourhood and assigns a value to the corresponding pixel in the output image. In the figure, the morphological erosion function sets the value of the output pixel to 14. The opening of A by B is obtained by the erosion of A by B, followed by dilation of the resulting image by B:
A ∘ B = (A ⊖ B) ⊕ B.
In the case of a square of side 10 and a disc of radius 2 as the structuring element, the opening is a square of side 10 with rounded corners, where the corner radius is 2; the sharp edges start to disappear. Opening of an image is thus erosion followed by dilation with the same structuring element.
Closing of an image is the reverse of the opening operation. The closing of A by B is obtained by the dilation of A by B, followed by erosion of the resulting structure by B:
A • B = (A ⊕ B) ⊖ B.
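These compositions can be checked directly in MATLAB, since imopen and imclose are defined as exactly these erosion/dilation pairs (the test image and structuring-element size are assumptions):
I = imread('cameraman.tif');           % standard gray scale test image
se = strel('disk', 2);
O1 = imopen(I, se);                    % opening of I by se
O2 = imdilate(imerode(I, se), se);     % erosion followed by dilation
isequal(O1, O2)                        % returns true
C1 = imclose(I, se);                   % closing of I by se
C2 = imerode(imdilate(I, se), se);     % dilation followed by erosion
isequal(C1, C2)                        % returns true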
An edge in an image is a boundary or contour at which a significant change occurs in some physical aspect of the image, such as surface reflectance or illumination.
The first enhancement method considered is block analysis, where the entire image is split into a number of blocks and each block is enhanced individually. The next method is the erosion-dilation method, which is similar to block analysis but uses morphological operations (erosion and dilation) on the entire image rather than splitting it into blocks. These methods were initially applied to gray level images and later extended to colour images by splitting the colour image into its respective R, G and B components, enhancing them individually and concatenating them to yield the enhanced image. All of the above techniques operate on the image in the spatial domain.
The final method is DCT based, where the frequency domain is used: the DC coefficient of the image is scaled after the DCT has been taken. The DC coefficient is adjusted because it contains the maximum information. Here, the image is moved from the RGB domain to the YCbCr domain for processing, and in YCbCr the DC coefficient, i.e. Y(0, 0), is adjusted (scaled). The image is converted from RGB to YCbCr because if it is enhanced without converting, there is a good chance of yielding an undesired output image. The enhancement of images is done using the log operator, which is chosen because it avoids abrupt changes in lighting: for example, if two adjacent pixel values are 10 and 100, their difference on a normal scale is 90, but on a logarithmic scale this difference reduces to just 1, providing a good platform for image enhancement.
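A minimal sketch of this frequency-domain enhancement, assuming a simple scalar gain on the DC coefficient (the gain of 1.5 and the file name are assumptions, not values from this work):
RGB = imread('dark_field.jpg');        % poorly lit input image (file name assumed)
ycbcr = rgb2ycbcr(RGB);                % move from RGB to YCbCr
Y = double(ycbcr(:,:,1));              % luminance channel
D = dct2(Y);                           % 2-D DCT of the luminance channel
D(1,1) = 1.5 * D(1,1);                 % scale the DC coefficient Y(0,0); gain assumed
Yenh = idct2(D);                       % back to the spatial domain
ycbcr(:,:,1) = uint8(min(max(Yenh, 0), 255));   % clamp and replace the luminance
out = ycbcr2rgb(ycbcr);                % back to RGB for display
imshow(out);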
Many applications need to obtain clear and useful information from an image that may have been captured under different conditions, such as poor or bright lighting, or with moving or still subjects. This section deals with background analysis of the image by blocks. In this project, D represents the digital space under study.
For each analyzed block, the maximum (Mi) and minimum (mi) intensity values are used to determine the background measures, and the background criteria value is used to select the background parameter. The background parameter lies between the clear and dark intensity levels: in the dark region it takes the maximum intensity level (Mi), while in the clear region it takes the minimum intensity level (mi).
The enhanced image is obtained by applying the equations below. Let f be the original image, which is subdivided into a number of blocks, each block being a sub-image of the original image. For each block i, the minimum intensity mi and maximum intensity Mi are calculated, and these values are used to find the background criteria τi in the following way:
τi = (mi + Mi) / 2.
τi is used as a threshold between the clear and dark intensity levels. Based on the value of τi, the background parameter is decided for each analyzed block, and the contrast enhancement is expressed accordingly as a logarithmic operator scaled by the background parameter. The background parameter depends entirely on the background criteria value: for f <= τi, the background parameter takes the maximum intensity value Mi within the analyzed block, and the minimum intensity value mi otherwise. In order to avoid an indetermination condition, unity is added to the logarithmic function.
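A minimal sketch of the per-block background analysis, assuming the midpoint criterion given above and a simple normalised logarithmic enhancement where the exact operator of the original formulation could not be recovered (block size, enhancement form and file name are all assumptions):
I = im2double(imread('dark_field.jpg'));    % input image (file name assumed)
if size(I,3) == 3, I = rgb2gray(I); end     % work on a gray level image
B = 32;                                     % block size (assumed)
E = zeros(size(I));
for r = 1:B:size(I,1)
    for c = 1:B:size(I,2)
        rows = r:min(r+B-1, size(I,1));
        cols = c:min(c+B-1, size(I,2));
        blk = I(rows, cols);
        mi = min(blk(:));  Mi = max(blk(:));
        tau = (mi + Mi) / 2;                % background criteria for this block
        bp = mi * ones(size(blk));          % clear pixels take the minimum mi
        bp(blk <= tau) = Mi;                % dark pixels take the maximum Mi
        bp = max(bp, 1/255);                % guard against an all-black block
        E(rows, cols) = log(1 + blk) ./ log(1 + bp);   % log enhancement (form assumed)
    end
end
imshow(E, []);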
The greater the number of blocks, the better the quality of the enhanced image. As the size of the structuring element increases, it becomes hard to preserve the image, since blurring and contouring effects become severe. The best results are obtained by keeping the size of the structuring element at 2 (μ = 2). A sample input (left half of the image) and output image (right half) for block analysis is shown below:
Fig 4. Block Analysis for Gray level images
This method is similar to block analysis in many ways, apart from the fact that the manipulation is done on the image as a whole rather than partitioning it into blocks. First, the minimum intensity Imin(x) and maximum intensity Imax(x) contained within a structuring element B of elemental size 3 × 3 are calculated. These values are used to find the background criteria τ(x):
τ(x) = (Imin(x) + Imax(x)) / 2,
where Imin(x) and Imax(x) correspond to the morphological erosion and dilation of the image, respectively.
Therefore, the contrast operator can be described as in the equations above, with the block-wise values replaced by these pixel-wise values. By employing the erosion-dilation method we obtain a better local analysis of the image for detecting the background criteria than with the previously used method of blocks. This is because the structuring element μB permits the analysis of the eight neighbouring pixels at each point in the image. By increasing the size of the structuring element, more pixels are taken into account for finding the background criteria. It can be easily visualized that several characteristics that are not visible at first sight appear in the enhanced images. The trouble with this method is that when morphological erosion or dilation is used with a large size of μ to reveal the background, undesired values may be generated. In general it is desirable to filter an image without generating any new components. The transformation that makes it possible to eliminate unnecessary parts without affecting other regions of the image is defined in mathematical morphology and is termed transformation by reconstruction. We use opening by reconstruction because it restores the original shape of the objects in the image that remain after erosion, as it touches the regional minima and merges the regional maxima. This particular characteristic allows the modification of the altitude of regional maxima when the size of the structuring element increases, thereby aiding in the detection of the background criteria. The opening by reconstruction is expressed as the reconstruction by dilation of the image f from the marker given by the erosion of f by μB.
It can be observed from this definition that opening by reconstruction first erodes the input image and uses the result as a marker. The marker image is the image that contains the starting or seed locations; here, the eroded image is used as the marker. Dilation of the marker is then performed iteratively until stability is achieved. The image background is then obtained from the erosion of the opening by reconstruction.
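In MATLAB this corresponds directly to imreconstruct, with the eroded image as the marker and the original image as the mask (the structuring-element size is an assumption):
I = imread('cameraman.tif');           % gray scale test image
se = strel('disk', 5);                 % size of the structuring element muB (assumed)
marker = imerode(I, se);               % erosion of the input: the marker (seed) image
obr = imreconstruct(marker, I);        % dilate the marker under I iteratively until stability
imshowpair(I, obr, 'montage');         % obr serves as the background estimate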
Median filtering is a nonlinear process useful in reducing impulsive, or salt-and-pepper, noise. It is also useful in preserving edges in an image while reducing random noise. For example, suppose the pixel values within a window are 5, 6, 55, 10 and 15, and the pixel being processed has the value 55. The output of the median filter at the current pixel location is 10, which is the median of the five values. Like median filtering, out-range pixel smoothing is a nonlinear operation and is useful in reducing salt-and-pepper noise. If the difference between the local average and the value of the pixel being processed is above some threshold, the current pixel value is replaced by the average; otherwise, the value is not affected. Because it is difficult to determine the best parameter values in advance, it may be useful to process an image using several different threshold values and window sizes and select the best result.
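A short sketch reproducing the five-value example above and applying a 3-by-3 median filter to a test image (window size and noise density are assumptions made for the demonstration):
median([5 6 55 10 15])                 % returns 10, the median of the five window values
I = imread('cameraman.tif');           % gray scale test image
J = imnoise(I, 'salt & pepper', 0.05); % add impulsive noise for the demonstration
K = medfilt2(J, [3 3]);                % 3-by-3 median filter
imshowpair(J, K, 'montage');           % noisy image beside the filtered result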
Detecting edges is very useful in a number of contexts. For example, in a typical image understanding task such as object identification, an essential step is to segment an image into different regions corresponding to different objects in the scene, and edge detection is the first step in image segmentation. Another example is in the development of a low bit-rate image coding system in which only edges are coded; it is well known that an image consisting of only edges is highly intelligible. The significance of a physical change in an image depends on the application: an intensity change that would be classified as an edge in one application might not be considered an edge in another.
Thresholding Calculation
The simplest method of image segmentation is the thresholding method. This method uses a clip level (or threshold value) to turn a gray scale image into a binary image. The key to this method is to select the threshold value (or values, when multiple levels are selected). Several popular methods are used in industry, including the maximum entropy method, Otsu's method (maximum variance) and k-means clustering. (In general usage, a threshold is a level or point at which something starts to happen or come into effect.) The purpose of thresholding is to extract those pixels from an image which represent an object (either text or other line image data such as graphs or maps). Though the desired information is binary, the pixels represent a range of intensities; thus the objective of binarization is to mark pixels that belong to true foreground regions with a single intensity and background regions with different intensities. Thresholding is the transformation of an input image f to an output (segmented) binary image g as follows:
g(i,j)=1 for f(i,j) >=T.
g(i,j)=0 for f(i,j) <T.
where T is the threshold, g(i,j) = 1 for image elements of objects, and g(i,j) = 0 for image elements of the background. If objects do not touch each other, and if their gray levels are clearly distinct from the background gray levels, thresholding is a suitable segmentation method. Correct threshold selection is crucial for successful threshold segmentation; this selection can be determined interactively or it can be the result of some threshold detection method. Only under very unusual circumstances can thresholding succeed using a single threshold for the whole image (global thresholding), since even in very simple images there are likely to be gray-level variations in objects and background. This variation may be due to non-uniform lighting, non-uniform input device parameters or a number of other factors.
Segmentation using variable thresholds (adaptive thresholding) is an approach in which the threshold value varies over the image as a function of local image characteristics. A global threshold is determined from the whole image f: T = T(f). Local thresholds are position dependent: T = T(f, fc), where fc is the part of the image in which the threshold is determined. One option is to divide the image f into sub-images and determine a threshold independently in each sub-image; if a threshold cannot be determined in some sub-image, it can be interpolated from thresholds determined in neighbouring sub-images. Each sub-image is then processed with respect to its local threshold. Basic thresholding as defined above has many modifications. One possibility is to segment an image into regions of pixels with gray levels from a set D, and into background otherwise (band thresholding):
g(i,j) = 1 for f(i,j) ∈ D
g(i,j) = 0 otherwise.
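A minimal sketch of these thresholding variants using built-in functions (the test image, adaptive sensitivity and band limits are assumptions):
G = imread('cameraman.tif');           % gray scale input (test image assumed)
T = graythresh(G);                     % global Otsu (maximum variance) threshold, T in [0,1]
bw1 = imbinarize(G, T);                % g = 1 where f >= T, 0 otherwise
Tloc = adaptthresh(G, 0.5);            % adaptive threshold from local image characteristics
bw2 = imbinarize(G, Tloc);             % position-dependent (variable) thresholding
bw3 = (G >= 80) & (G <= 160);          % band thresholding with D = [80, 160] (limits assumed)
imshowpair(bw1, bw2, 'montage');       % compare global and adaptive results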
III. Results & Measurements
There are four main steps used for the detection of plant leaf diseases:
1. RGB image acquisition.
2. Convert the input image into gray scale.
3. Segment the component.
4. Obtain the useful segment.
RGB Image Acquisition
The diseased paddy image is used as the input.
Fig 5. Diseased paddy: sheath blight and brown spot
Convert the Input Image into Gray Scale
Input colour images have the primary colours red, green and blue. It is not possible to implement the application directly on RGB images because of their range (0-255); hence the RGB images are converted into Lab images.
Fig 6. Gray scale image
Segment the Component
Image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels,
also known as super-pixels). The goal of segmentation is to simplify and/or change the representation of an
image into something that is more meaningful and easier to analyze. Image segmentation is typically used to
locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process
of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics.
After segmentation of the gray scale image, the required infected region of the paddy leaf is obtained.
Fig 7. Infected Leaf
IV. CONCLUSION
Several automated systems reported in the literature have been developed for irrigation control and environmental monitoring in the field. However, it is essential to monitor plant growth stage by stage and take decisions accordingly; in addition, monitoring environmental parameters such as pH and moisture content is essential. In earlier days problems were identified only by human perception, which was difficult and led to many issues. In this work, various paddy diseases are identified using image processing, which is accurate and easy to interpret.

More Related Content

PPTX
Exploitation techniques and fuzzing
PPTX
Windows privilege escalation by Dhruv Shah
PPT
Sql injection
PDF
Hacking identity: A Pen Tester's Guide to IAM
PPT
authentication.ppt
PDF
Pentesting GraphQL Applications
PPT
Java Networking
PPTX
Lecture # 007 AES.pptx
Exploitation techniques and fuzzing
Windows privilege escalation by Dhruv Shah
Sql injection
Hacking identity: A Pen Tester's Guide to IAM
authentication.ppt
Pentesting GraphQL Applications
Java Networking
Lecture # 007 AES.pptx

What's hot (20)

PDF
Test Driven Development With Python
PPT
SQLITE Android
PPTX
Linux privilege escalation 101
PDF
AWS Pentesting
PPT
Introduction To OWASP
PDF
Supply Chain Attacks
PDF
Chapter 1 Introduction of Cryptography and Network security
PPTX
powershell-is-dead-epic-learnings-london
PDF
Android Malware Detection Mechanisms
PDF
iOS Application Security
PPTX
Wi-Fi Security Presentation.pptx
PDF
Cyber Forensics & Challenges
PPTX
Cyber ppt
PPTX
Breaking the cyber kill chain!
PPTX
Offensive Payment Security
PDF
The Best (and Worst) of Django
PPTX
Cryptography
PPT
Elgamal Digital Signature
PPTX
Detecting modern PowerShell attacks with SIEM
PPTX
Feistel cipher
Test Driven Development With Python
SQLITE Android
Linux privilege escalation 101
AWS Pentesting
Introduction To OWASP
Supply Chain Attacks
Chapter 1 Introduction of Cryptography and Network security
powershell-is-dead-epic-learnings-london
Android Malware Detection Mechanisms
iOS Application Security
Wi-Fi Security Presentation.pptx
Cyber Forensics & Challenges
Cyber ppt
Breaking the cyber kill chain!
Offensive Payment Security
The Best (and Worst) of Django
Cryptography
Elgamal Digital Signature
Detecting modern PowerShell attacks with SIEM
Feistel cipher
Ad

Similar to Automated Crop Inspection and Pest Control Using Image Processing (20)

PDF
A study on real time plant disease diagonsis system
PDF
8. 10168 12478-1-pb
PDF
IRJET- Crop Disease Detector using Drone and Matlab
PDF
Crops diseases detection and solution system
PDF
Rice Disease Detection Using Artificial Intelligence and Machine.pdf
PDF
V1_issue1paperSEVEN_F59
PDF
A Picture Is Worth A Thousand Words
PDF
An effective identification of crop diseases using faster region based convol...
PDF
RICE PLANT DISEASE DETECTION AND REMEDIES RECOMMENDATION USING MACHINE LEARNING
PDF
Paddy field classification with MODIS-terra multi-temporal image transformati...
PDF
711201940
PDF
711201940
PDF
IRJET- Leaf Disease Detection using Image Processing
PDF
scope of artificial intelligence in agriculture
PDF
IRJET - E-Learning Package for Grape & Disease Analysis
PDF
76 s201912
PDF
A survey on plant leaf disease identification and classification by various m...
PDF
Paper id 71201958
PDF
IRJET- Analysis of Predicting Diseases for Smart Croping
PDF
IRJET- Applications of different Techniques in Agricultural System: A Review
A study on real time plant disease diagonsis system
8. 10168 12478-1-pb
IRJET- Crop Disease Detector using Drone and Matlab
Crops diseases detection and solution system
Rice Disease Detection Using Artificial Intelligence and Machine.pdf
V1_issue1paperSEVEN_F59
A Picture Is Worth A Thousand Words
An effective identification of crop diseases using faster region based convol...
RICE PLANT DISEASE DETECTION AND REMEDIES RECOMMENDATION USING MACHINE LEARNING
Paddy field classification with MODIS-terra multi-temporal image transformati...
711201940
711201940
IRJET- Leaf Disease Detection using Image Processing
scope of artificial intelligence in agriculture
IRJET - E-Learning Package for Grape & Disease Analysis
76 s201912
A survey on plant leaf disease identification and classification by various m...
Paper id 71201958
IRJET- Analysis of Predicting Diseases for Smart Croping
IRJET- Applications of different Techniques in Agricultural System: A Review
Ad

More from IJERDJOURNAL (20)

PDF
Predictive Data Mining with Normalized Adaptive Training Method for Neural Ne...
PDF
The development of the Islamic Heritage in Southeast Asia tradition and futur...
PDF
An Iot Based Smart Manifold Attendance System
PDF
A Novel Approach To Detect Trustworthy Nodes Using Audit Based Scheme For WSN
PDF
Human Resource Competencies: An Empirical Assessment
PDF
Prospects and Problems of Non-Governmental Organizations in Poverty Alleviati...
PDF
Development of Regression Model Using Lasso And Optimisation of Process Param...
PDF
Use of Satellite Data for Feasibility Study And Preliminary Design Project Re...
PDF
Microwave Assisted Sol Gel Synthesis of Magnesium Oxide(Mgo)
PDF
Development of Enhanced Frequency Drive for 3-Phase Induction Motors Submitte...
PDF
Short-Term Load Forecasting Using ARIMA Model For Karnataka State Electrical ...
PDF
Optimal Pricing Policy for a Manufacturing Inventory Model with Two Productio...
PDF
Analysis of failure behavior of shear connection in push-out specimen by thre...
PDF
Discrete Time Batch Arrival Queue with Multiple Vacations
PDF
Regional Rainfall Frequency Analysis By L-Moments Approach For Madina Region,...
PDF
Implementing Oracle Utility-Meter Data Management For Power Consumption
PDF
Business Intelligence - A Gift for Decision Maker for the Effective Decision ...
PDF
Effect of Water And Ethyl Alcohol Mixed Solvent System on the Stability of Be...
PDF
Design of Synthesizable Asynchronous FIFO And Implementation on FPGA
PDF
Prospect and Challenges of Renewable Energy Resources Exploration, Exploitati...
Predictive Data Mining with Normalized Adaptive Training Method for Neural Ne...
The development of the Islamic Heritage in Southeast Asia tradition and futur...
An Iot Based Smart Manifold Attendance System
A Novel Approach To Detect Trustworthy Nodes Using Audit Based Scheme For WSN
Human Resource Competencies: An Empirical Assessment
Prospects and Problems of Non-Governmental Organizations in Poverty Alleviati...
Development of Regression Model Using Lasso And Optimisation of Process Param...
Use of Satellite Data for Feasibility Study And Preliminary Design Project Re...
Microwave Assisted Sol Gel Synthesis of Magnesium Oxide(Mgo)
Development of Enhanced Frequency Drive for 3-Phase Induction Motors Submitte...
Short-Term Load Forecasting Using ARIMA Model For Karnataka State Electrical ...
Optimal Pricing Policy for a Manufacturing Inventory Model with Two Productio...
Analysis of failure behavior of shear connection in push-out specimen by thre...
Discrete Time Batch Arrival Queue with Multiple Vacations
Regional Rainfall Frequency Analysis By L-Moments Approach For Madina Region,...
Implementing Oracle Utility-Meter Data Management For Power Consumption
Business Intelligence - A Gift for Decision Maker for the Effective Decision ...
Effect of Water And Ethyl Alcohol Mixed Solvent System on the Stability of Be...
Design of Synthesizable Asynchronous FIFO And Implementation on FPGA
Prospect and Challenges of Renewable Energy Resources Exploration, Exploitati...

Recently uploaded (20)

PDF
Operating System & Kernel Study Guide-1 - converted.pdf
PPTX
CARTOGRAPHY AND GEOINFORMATION VISUALIZATION chapter1 NPTE (2).pptx
PDF
R24 SURVEYING LAB MANUAL for civil enggi
PDF
SM_6th-Sem__Cse_Internet-of-Things.pdf IOT
PPTX
Foundation to blockchain - A guide to Blockchain Tech
PPTX
FINAL REVIEW FOR COPD DIANOSIS FOR PULMONARY DISEASE.pptx
PDF
Enhancing Cyber Defense Against Zero-Day Attacks using Ensemble Neural Networks
PDF
TFEC-4-2020-Design-Guide-for-Timber-Roof-Trusses.pdf
PDF
July 2025 - Top 10 Read Articles in International Journal of Software Enginee...
PDF
Mohammad Mahdi Farshadian CV - Prospective PhD Student 2026
PPTX
Recipes for Real Time Voice AI WebRTC, SLMs and Open Source Software.pptx
PPTX
Geodesy 1.pptx...............................................
PPTX
Internet of Things (IOT) - A guide to understanding
PPTX
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
PPTX
UNIT 4 Total Quality Management .pptx
PPTX
CH1 Production IntroductoryConcepts.pptx
PPTX
CYBER-CRIMES AND SECURITY A guide to understanding
PPTX
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
PDF
Well-logging-methods_new................
PDF
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
Operating System & Kernel Study Guide-1 - converted.pdf
CARTOGRAPHY AND GEOINFORMATION VISUALIZATION chapter1 NPTE (2).pptx
R24 SURVEYING LAB MANUAL for civil enggi
SM_6th-Sem__Cse_Internet-of-Things.pdf IOT
Foundation to blockchain - A guide to Blockchain Tech
FINAL REVIEW FOR COPD DIANOSIS FOR PULMONARY DISEASE.pptx
Enhancing Cyber Defense Against Zero-Day Attacks using Ensemble Neural Networks
TFEC-4-2020-Design-Guide-for-Timber-Roof-Trusses.pdf
July 2025 - Top 10 Read Articles in International Journal of Software Enginee...
Mohammad Mahdi Farshadian CV - Prospective PhD Student 2026
Recipes for Real Time Voice AI WebRTC, SLMs and Open Source Software.pptx
Geodesy 1.pptx...............................................
Internet of Things (IOT) - A guide to understanding
KTU 2019 -S7-MCN 401 MODULE 2-VINAY.pptx
UNIT 4 Total Quality Management .pptx
CH1 Production IntroductoryConcepts.pptx
CYBER-CRIMES AND SECURITY A guide to understanding
M Tech Sem 1 Civil Engineering Environmental Sciences.pptx
Well-logging-methods_new................
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk

Automated Crop Inspection and Pest Control Using Image Processing

  • 1. International Journal of Engineering Research and Development e-ISSN: 2278-067X, p-ISSN: 2278-800X, www.ijerd.com Volume 13, Issue 3 (March 2017), PP.25-35 25 Automated Crop Inspection and Pest Control Using Image Processing J.Mahalakshmi1, Pg Student, G.Shanthakumari2, 1 sri Sairam Engineering College,Chennai, India. 2 Asst Professor, Department Of Ece, Sri Sairam Engineering College, Chennai, India. ABSTRACT: Agriculture is the backbone of our country. India is an agricultural country where the most of the population depends on agriculture. Research in agriculture is aimed towards increasing productivity and profit. There are several automated systems available in literature, which are developed for irrigation control and environmental monitoring in the field. However, it is essential to monitor the plant growth stage by stage and take decisions accordingly. In addition to monitoring the environmental parameters such as pH, moisture content and temperature, it is inevitable to identify the onset of plant diseases too. It is the key to prevent the losses in yield and quantity of agricultural product. Plant disease identification by continuous visual monitoring is very difficult task to farmers and at the same time it is less accurate and can be done in limited areas. Hence this projects aims at developing an image processing algorithm to identify the diseases in rice plant. Rice blast disease occurring in rice plant is due to magnaporthe grisea and this disease also occurs in wheat, rye, barley, pearl and millet. Due to rice blast disease, 60 million people are affected in 85 countries worldwide. Image processing technique is adopted as it is more accurate. Early disease detection can increase the crop production by inducing proper pesticide usage. I. INTRODUCTION Hardware prototype of the proposed system will be developed by using Arm processor. The agricultural sector plays an important role in economic development by providing rural employment. Management of paddy plant from early stage to nature harvest state increases the yield. Paddy is one of the nation’s most important products as it is considered as to be one of India stable food and cereal crops and because of that, many efforts have taken to ensure its safety, one of them is monitoring of paddy plant. Paddy plants are affected by various fungi. This work focus on recognizing paddy plant disease namely rice blast disease. The proper detection and recognizing of the disease is very important for applying fertilizer. There will be a decline in the production, if the diseases are not recognized at early stage. The main goal of this work is to build up an image recognition system that identifies the paddy plant diseases affecting the cultivation of paddy. The system also monitors the environmental parameters. The paddy leaf image is given as input image. To reduce correlated colour information, convert RGB image to gray scale. Morphological process consists of erosion, dilation, opening and closing operation. Apply morphological opening operation on a gray scale paddy leaf image to reduce the small noise. Morphological closing operation is used to smooth the leaf structure and to fuse the narrow breaks. From the process, it leads to thresholding calculation. In this image segmentation is done here. After this process, the infected region of the paddy leaf is found. There are four main steps used for the detection of plant leaf diseases: 1) RGB image acquisition. 
2) Convert the input image into gray scale 3) Segment the component 4) Obtain the useful segment In Image Processing, whose goal is to investigate the history of an image using passive approaches, has emerged as a possible way to solve the above crisis. The basic idea underlying Image Processing is that most, if not all, image processing tools leave some (usually imperceptible) traces into the processed image, and hence the presence of these traces can be investigated in order to understand whether the image has undergone some kind of processing or not. In the last years many algorithms for detecting different kinds of traces have been proposed which usually extract a set of features from the image and use them to classify the content as exposing the trace or not. Computerized analysis of medical images is gaining importance day by day, as it often produces higher sensitivity irrespective of experience of the analyst. For this reason various image analysis techniques are used for automatic detection of various diseases in Paddy. Various methods have been developed for detection
  • 2. Automated Crop Inspection And Pest Control Using Image Processing 26 of exudates. These include thresholding and edge detection based techniques, FCM based approach, gray level variation based approach, multilayer perception based approach. Two principal research paths evolve under the name of Digital Image Processing. The first one includes methods that attempt at answering question, was the image captured by the device it is claimed to be acquired with? By performing some kind of ballistic analysis to identify the device that captured the image or at least to determine which devices did not capture it. The history of a digital image can be represented as a composition of several steps, collected into three main phases: acquisition, coding, and editing. These methods will be collected in the following under the common name of image source device identification techniques. The second group of methods aims instead at exposing traces of semantic manipulation (i.e. forgeries) by studying inconsistencies in natural image statistics. Digital image processing allows one to enhance image features of interest while attenuating detail irrelevant to a given application, and then extract useful information about the scene from the enhanced image. This introduction is a practical guide to the challenges, and the hardware and algorithms used to meet them. Images are produced by a variety of physical devices.Such processing is called image enhancement; processing by an observer to extract information is called image analysis. Enhancement and analysis are distinguished by their output, images Vs scene information, and by the challenges faced and methods employed. Image enhancement has been done by chemical, optical, and electronic means, while analysis has been done mostly by humans and electronically. Digital image processing is a subset of the electronic domain wherein the image is converted to an array of small integers called pixels, representing a physical quantity such as scene radiance stored in a digital memory, and processed by computer or other digital hardware. Digital image processing, either as enhancement for human observers or performing autonomous analysis, offers advantages in cost, speed, and flexibility, and with the rapidly falling price and rising performance of personal computers it has become the dominant method in use Fig 1. Block Diagram Paddy: A paddy field is a flooded parcel of arable land used for rowing semiaquatic rice. Paddy cultivation should not be confused with cultivation of deep water rice, which is grown in flooded conditions with water more than 50 cm (20 in) deep for at least a month. Genetic evidence shows that all forms of paddy rice, both indica and japonica, spring from a domestication of the wild rice Oryza rufipogon that first occurred 8,200– 13,500 years ago South of the Yangtze River in present-day China. However, the domesticated indica subspecies currently appears to be a product of the introgression of favorable alleles from japonica at a later date, so that there are possibly several events of cultivation and domestication. Paddy fields are the typical feature of rice farming in east, southand southeast Asia. Fields can be built into steep hillsides as terraces and adjacent to depressed or steeply sloped features such as rivers or marshes. They can require a great deal of labor and materials to create, and need large quantities of water for irrigation. 
Oxen and water buffalo, adapted for life in wetlands, are important working animals used extensively in paddy field farming. During the 20th century, paddy-field farming became the dominant form of growing rice. Hill tribes of Thailand still cultivate dry-soil varieties called upland rice. Paddy field farming is practiced in Cambodia, Bangladesh, China, Taiwan, India, Indonesia, Iran, Japan, North Korea, South Korea, Malaysia, Myanmar, Nepal, Pakistan, the Philippines, Sri Lanka, Thailand, Vietnam, and Laos, Northern Italy, the Camargue in France, the Artibonite Valley in Haiti, and Sacramento Valley in California. Paddy fields are a major source of atmospheric methane and have been estimated to contribute in the range of 50 to 100 million tonnes of the gas per annum. Studies have shown that this can be significantly reduced while also boosting crop yield by draining the paddies to allow the soil to aerate to interrupt methane production, Studies have also shown the variability in assessment of methane emission using local, regional and global factors and calling for better inventorisation based on micro level data.
  • 3. Automated Crop Inspection And Pest Control Using Image Processing 27 Table 1 Types of Diseases Nursery diseases Main field diseases Blast –PyriculariaBrown spot – grisea(P.oryzae) Helminthosporium oryzae Bacterial Leaf Blight –Sheath Rot- Sarocladium Xanthomonas oryzaoryzae pv.oryzae Rice tungro disease –Sheath Blight – Rice tungroRhizoctonia Solani virus(RTSV, RTBV) False Smut – Ustilaginoidea virens Grain discolouration – fungal complex Leaf streak – Xanthomonas oryzae pv.oryzicola Grayscale Process In photography and computing, a grayscale or grayscale digital image is an image in which the value of each pixel is a single sample, that is, it carries only intensity information. Images of this sort, also known as black-and-white, are composed exclusively of shades of gray, varying from black at the weakest intensity to white at the strongest. Grayscale images are distinct from one-bit bi-tonal black-and-white images, which in the context of computer imaging are images with only the two colours, black, and white (also called bi-level or binary images). Grayscale images have many shades of gray in between. Grayscale images are often the result of measuring the intensity of light at each pixel in a single band of the electromagnetic spectrum (e.g. infrared, visible light, ultraviolet, etc.), and in such cases they are monochromatic proper when only a given frequency is captured. But also they can be synthesized from a full colour image; see the section about converting to grayscale. The intensity of a pixel is expressed within a given range between a minimum and a maximum, inclusive. This range is represented in an abstract way as a range from 0 (total absence, black) and 1 (total presence, white), with any fractional values in between. This notation is used in academic papers, but this does not define what "black" or "white" is in terms of colorimetric. Another convention is to employ percentages, so the scale is then from 0% to 100%. This is used for a more intuitive approach, but if only integer values are used, the range encompasses a total of only 101 intensities, which are insufficient to represent a broad gradient of gray. Also, the percentile notation is used in printing to denote how much ink is employed in half toning, but then the scale is reversed, being 0% the paper white (no ink) and 100% a solid black (full ink). In computing, although the grayscale can be computed through rational numbers, image pixels are stored in binary, quantized form. Some early grayscale monitors can only show up to sixteen (4-bit) different shades, but today grayscale images (as photographs) intended for visual display (both on screen and printed) are commonly stored with 8 bits per sampled pixel, which allows 256 different intensities (i.e., shades of gray) to be recorded, typically on a non-linear scale. Technical uses (e.g. in medical imaging or remote sensing applications) often require more levels, to make full use of the sensor accuracy (typically 10 or 12 bits per sample) and to guard against round off errors in computations. The TIFF and the PNG (among other) image file formats support 16-bit grayscale natively, although browsers and many imaging programs tend to ignore the low order 8 bits of each pixel. No matter what pixel depth is used, the binary representations assume that 0 is black and the maximum value (255 at 8 sbpp, 65,535 at 16 bpp, etc.) 
is white, if not otherwise noted.Conversion of a color image to grayscale is not unique; different weighting of the color channels effectively represent the effect of shooting black-and- white film with different-colored photographic filters on the cameras.  I = rgb2gray(RGB)  new map = rgb2gray(map) I = rgb2gray (RGB) converts the truecolor image RGB to the grayscale intensity image I. The rgb2gray function converts RGB images to grayscale by eliminating the hue and saturation information while retaining the luminance. If you have Parallel Computing Toolbox installed, rgb2gray can perform this conversion on a GPU. Newmap = rgb2gray(map) returns a grayscale colormap equivalent to map A common strategy is to use the principles of photometry or, more broadly colorimetry to match the luminance of the grayscale image to the luminance of the original color image. This also ensures that both
  • 4. Automated Crop Inspection And Pest Control Using Image Processing 28 images will have the same absolute luminance, as can be measured in its SI units of candelas per square meter, in any given area of the image, given equal whitepoints. In addition, matching luminance provides matching perceptual lightness measures, such as L* (as in the 1976 CIE Lab color space) which is determined by the luminance Y (as in the CIE 1931 XYZ color space). To convert a color from a colorspace based on an RGB color model to a grayscale representation of its luminance, weighted sums must be calculated in a linear RGB space, that is, after the gamma compression function has been removed first via gamma expansion. For the sRGB color space, gamma expansion is defined as, Where Csrgb represents any of the three gamma-compressed sRGB primaries (Rsrgb, Gsrgb, and Bsrgb, each in range [0,1]) and Clinear is the corresponding linear-intensity value (R, G, and B, also in range [0,1]). Then, luminance is calculated as a weighted sum of the three linear-intensity values. The sRGB color space is defined in terms of the CIE 1931 linear luminance Y, which is given by, . The coefficients represent the measured intensity perception of typical trichromat humans, depending on the primaries being used; in particular, human vision is most sensitive to green and least sensitive to blue. To encode grayscale intensity in linear RGB, each of the three primaries can be set to equal the calculated linear luminance Y (replacing R, G,B by Y,Y,Y to get this linear grayscale). Linear luminance typically needs to be gamma compressed to get back to a conventional non-linear representation. For sRGB, each of its three primaries is then set to the same gamma-compressed Ysrgb given by the inverse of the gamma expansion above as, Web browsers and other software that recognizes sRGB images will typically produce the same rendering for a such a grayscale image as it would for an sRGB image having the same values in all three color channels. Fig 2. Gray scale image II. Morphological Transformations Morphology is a technique of image processing based on shape and form of objects. Morphological methods apply a structuring element to an input image, creating an output image at the same size. The value of
II. Morphological Transformations
Morphology is a technique of image processing based on the shape and form of objects. Morphological methods apply a structuring element to an input image, creating an output image of the same size. The value of each pixel in the output image is based on a comparison of the corresponding pixel in the input image with its neighbours. By choosing the size and shape of the neighbourhood, a morphological operation can be constructed that is sensitive to specific shapes in the input image. The basic morphological operations are erosion, dilation, opening and closing; combinations of these operations are often used to perform morphological image analysis. Irrespective of the size of the structuring element, its origin is located at its centre. Morphological opening of an image f by a structuring element mB is the erosion of f by mB followed by dilation of the result by mB, and morphological closing is the dual operation, dilation followed by erosion. Here m is a homothetic (size) parameter: a size of m means a square of (2m + 1) × (2m + 1) pixels, and B is the structuring element of size 3 × 3 (here m = 1).
Dilation is a transformation that produces an image with the same shape as the original but a different size; it stretches or shrinks the original. Dilation raises the valleys and enlarges the width of maximum regions, so it can remove negative impulsive noise but does little to positive noise. The dilation of A by the structuring element B is defined by
A ⊕ B = ∪b∈B Ab,
i.e. the union of the translates of A by the elements b of B. If B has its centre on the origin, as before, then the dilation of A by B can be understood as the locus of the points covered by B when the centre of B moves inside A. Dilation of an image f by a structuring element s is written f ⊕ s. The structuring element s is positioned with its origin at (x, y) and the new pixel value is the maximum over the neighbourhood of (x, y) defined by s; for a flat, symmetric structuring element, g(x, y) = max{ f(x + i, y + j) : (i, j) ∈ s }.
The following figure illustrates the morphological dilation of a gray scale image. Note how the structuring element defines the neighbourhood of the pixel of interest, which is circled. The dilation function applies the appropriate rule to the pixels in the neighbourhood and assigns a value to the corresponding pixel in the output image. In the figure, the morphological dilation function sets the value of the output pixel to 16 because it is the maximum value of all the pixels in the input pixel's neighbourhood where the structuring element is on. The easiest way to describe it is to imagine the same fax/text written with a thicker pen. In a binary image, if any of the pixels in the neighbourhood is set to the value 1, the output pixel is set to 1.
Fig 3. Dilation Process
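A minimal MATLAB sketch of grayscale dilation with a 3 × 3 square structuring element (m = 1); the file name is a placeholder:

% Grayscale dilation: each output pixel becomes the neighbourhood maximum.
gray    = rgb2gray(imread('paddy_leaf.jpg'));   % placeholder file name
se      = strel('square', 3);                   % 3 x 3 structuring element (m = 1)
dilated = imdilate(gray, se);
imshowpair(gray, dilated, 'montage');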
Erosion is used to shrink objects in the image; it reduces the peaks and enlarges the widths of minimum regions, so it can remove positive noise but has little effect on negative impulsive noise. The erosion of the dark blue square results in the light blue square. The erosion of the binary image A by the structuring element B is defined by
A ⊖ B = { z : Bz ⊆ A },
i.e. the set of points z at which B, translated to z, is contained in A. In a binary image, if any of the pixels in the neighbourhood is set to 0, the output pixel is set to 0. Erosion of an image f by a structuring element s is written f ⊖ s. The structuring element s is positioned with its origin at (x, y) and the new pixel value is determined using the rule g(x, y) = 1 if s fits f at (x, y), and 0 otherwise; here "fit" means that every on pixel in the structuring element covers an on pixel in the image. The following figure illustrates the morphological erosion of a gray scale image. Note how the structuring element defines the neighbourhood of the pixel of interest, which is circled. The erosion function applies the appropriate rule to the pixels in the neighbourhood and assigns a value to the corresponding pixel in the output image; in the figure, the morphological erosion function sets the value of the output pixel to 14.
The opening of A by B is obtained by the erosion of A by B, followed by dilation of the resulting image by B:
A ∘ B = (A ⊖ B) ⊕ B.
In the case of a square of side 10 and a disc of radius 2 as the structuring element, the opening is a square of side 10 with rounded corners, where the corner radius is 2; the sharp edges start to disappear. Opening of an image is thus erosion followed by dilation with the same structuring element. Closing of an image is the reverse of the opening operation: the closing of A by B is obtained by the dilation of A by B, followed by erosion of the resulting structure by B:
A • B = (A ⊕ B) ⊖ B.
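The remaining operators, sketched the same way in MATLAB; the file name is a placeholder and the 3 × 3 square structuring element is an assumed choice:

% Erosion, opening and closing with the same structuring element.
gray   = rgb2gray(imread('paddy_leaf.jpg'));    % placeholder file name
se     = strel('square', 3);
eroded = imerode(gray, se);    % neighbourhood minimum
opened = imopen(gray, se);     % erosion then dilation: removes small bright noise
closed = imclose(gray, se);    % dilation then erosion: fuses narrow breaks, smooths structure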
An edge in an image is a boundary or contour at which a significant change occurs in some physical aspect of the image, such as the surface reflectance or illumination. The first method proposed here is block analysis, where the entire image is split into a number of blocks and each block is enhanced individually. The next method is the erosion-dilation method, which is similar to block analysis but applies morphological operations (erosion and dilation) to the entire image rather than splitting it into blocks. These methods were initially applied to gray-level images and later extended to colour images by splitting the colour image into its R, G and B components, enhancing each individually and concatenating them to yield the enhanced image. All the above-mentioned techniques operate on the image in the spatial domain. The final method is DCT based, where the frequency domain is used: the DC coefficient of the image is scaled after the DCT has been taken. The DC coefficient is adjusted because it carries the maximum information. Here, the image is moved from the RGB domain to the YCbCr domain for processing, and in YCbCr the DC coefficient, i.e. Y(0, 0), is adjusted (scaled). The image is converted from RGB to YCbCr because, if it is enhanced without converting, there is a good chance of an undesired output image. The enhancement of images is done using the log operator, chosen because it avoids abrupt changes in lighting. For example, if two adjacent pixel values are 10 and 100, their difference on a normal scale is 90, but on a logarithmic scale this difference reduces to just 1, providing a good platform for image enhancement. Many applications need to extract clear and useful information from an image that may have been captured in different conditions, such as poor or bright lighting, moving or still scenes, etc. This section deals with background analysis of the image by blocks. In this project, D represents the digital space under study.
For each analysed block, the maximum (Mi) and minimum (mi) intensity values are used to determine the background measures, from which a background criterion is computed and used to select the background parameter. The background parameter lies between the clear and dark intensity levels: if the block is in the dark region, the background parameter takes the maximum intensity level (Mi); if it is in the clear region, it takes the minimum intensity level (mi). The enhanced image is obtained by applying the contrast equation below.
Let f be the original image, subdivided into a number of blocks, each block being a sub-image of the original image. For each block n, the minimum intensity mi and maximum intensity Mi are calculated, and mi and Mi are used to find the background criterion, which serves as a threshold between clear and dark intensity levels. Based on its value, the background parameter is decided for each analysed block, and the contrast enhancement is expressed correspondingly through the logarithmic operator. The background parameter depends entirely on the background criterion: for f below or equal to the criterion, the background parameter takes the maximum intensity value Mi within the analysed block, and the minimum intensity value mi otherwise. To avoid an indetermination condition, unity is added to the argument of the logarithmic function. The greater the number of blocks, the better the quality of the enhanced image. As the size of the structuring element increases, it becomes hard to preserve the image, since blurring and contouring effects become severe; the best results are obtained by keeping the size of the structuring element at 2 (μ = 2). A sample input (left half of the image) and output image (right half) for block analysis is shown below:
Fig 4. Block Analysis for Gray level images
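As a rough, simplified MATLAB sketch of the block-analysis idea only (this is not the exact operator described above: the block size, the per-block logarithmic normalisation and the file name are all assumptions made for illustration):

% Block-wise logarithmic contrast stretch as a stand-in for the
% background-criterion-based enhancement described in the text.
gray      = im2double(rgb2gray(imread('paddy_leaf.jpg')));   % placeholder file name
blockSize = 32;                                              % assumed block size
stretch   = @(blk) log(1 + blk.data) ./ log(1 + max(blk.data(:)) + eps);
enhanced  = blockproc(gray, [blockSize blockSize], stretch);
imshowpair(gray, enhanced, 'montage');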
This method is similar to block analysis in many ways, apart from the fact that the manipulation is done on the image as a whole rather than by partitioning it into blocks. First, the minimum intensity Imin(x) and the maximum intensity Imax(x) contained in a structuring element B of elemental size 3 × 3 are calculated. These values are used to find the background criterion as described above, where Imin(x) and Imax(x) correspond to morphological erosion and dilation respectively, and the contrast operator is then expressed in the same way. By employing the erosion-dilation method we obtain a better local analysis of the image for detecting the background criterion than with the block method, because the structuring element μB permits the analysis of the eight neighbouring pixels at each point in the image. By increasing the size of the structuring element, more pixels are taken into account when finding the background criterion, and several characteristics that are not visible at first sight appear in the enhanced images. The trouble with this method is that when morphological erosion or dilation is used with a large size of μ to reveal the background, undesired values may be generated.
In general it is desirable to filter an image without generating any new components. The transformation that eliminates unnecessary parts without affecting other regions of the image is defined in mathematical morphology as transformation by reconstruction. Opening by reconstruction is used because it restores the original shape of the objects that remain after erosion, as it touches the regional minima and merges the regional maxima. This characteristic allows the modification of the altitude of regional maxima when the size of the structuring element increases, thereby aiding the detection of the background criterion. Opening by reconstruction first erodes the input image and uses the result as a marker; the marker image is the image which contains the starting or seed locations. Then dilation of the eroded image, i.e. the marker, is performed iteratively until stability is achieved. The image background is obtained from the erosion of the opening by reconstruction.
Median filtering is a nonlinear process useful in reducing impulsive, or salt-and-pepper, noise. It is also useful in preserving edges in an image while reducing random noise. For example, suppose the pixel values within a window are 5, 6, 55, 10 and 15, and the pixel being processed has a value of 55. The output of the median filter at the current pixel location is 10, which is the median of the five values. Like median filtering, out-range pixel smoothing is a nonlinear operation and is useful in reducing salt-and-pepper noise: if the difference between the local average and the value of the pixel being processed is above some threshold, the current pixel value is replaced by the average; otherwise, the value is not affected. Because it is difficult to determine the best parameter values in advance, it may be useful to process an image using several different threshold values and window sizes and select the best result.
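A short MATLAB sketch of opening by reconstruction and median filtering; the file name, the disk-shaped structuring element and the window size are assumptions made for illustration:

% Opening by reconstruction: erode to form a marker, then reconstruct
% against the original image; plus a 3 x 3 median filter for impulse noise.
gray      = rgb2gray(imread('paddy_leaf.jpg'));   % placeholder file name
se        = strel('disk', 5);                     % assumed structuring-element size
marker    = imerode(gray, se);                    % eroded image used as the marker
openedRec = imreconstruct(marker, gray);          % opening by reconstruction
denoised  = medfilt2(gray, [3 3]);                % median filtering for salt-and-pepper noise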
Detecting edges is very useful in a number of contexts. For example, in a typical image-understanding task such as object identification, an essential step is to segment an image into different regions corresponding to different objects in the scene, and edge detection is the first step in image segmentation. Another example is the development of a low-bit-rate image coding system in which only edges are coded; it is well known that an image consisting only of edges is still highly intelligible. The significance of a physical change in an image depends on the application: an intensity change that would be classified as an edge in one application might not be considered an edge in another.
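As an illustrative MATLAB step (the Sobel operator and the file name are assumed choices; the text does not fix a particular edge detector):

% Detect edges in the grayscale leaf image.
gray  = rgb2gray(imread('paddy_leaf.jpg'));   % placeholder file name
edges = edge(gray, 'Sobel');                  % binary edge map
imshow(edges);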
Thresholding Calculation
The simplest method of image segmentation is the thresholding method. It uses a clip level (or threshold value) to turn a gray-scale image into a binary image. The key to the method is selecting the threshold value (or values, when multiple levels are used); several popular methods are used in industry, including the maximum entropy method, Otsu's method (maximum variance) and k-means clustering. (A threshold is literally a strip of wood or stone forming the bottom of a doorway and crossed in entering a house or room; figuratively, it is the level or point at which something starts or ceases to happen or come into effect, or the level at which one starts to feel or react to something.)
The purpose of thresholding is to extract those pixels from an image which represent an object (either text or other line-image data such as graphs or maps). Though the information is binary, the pixels represent a range of intensities; thus the objective of binarization is to mark pixels that belong to true foreground regions with a single intensity and background regions with different intensities. Thresholding is the transformation of an input image f to an output (segmented) binary image g as follows:
g(i, j) = 1 for f(i, j) >= T,
g(i, j) = 0 for f(i, j) < T,
where T is the threshold, g(i, j) = 1 for image elements of objects and g(i, j) = 0 for image elements of the background. If objects do not touch each other, and if their gray levels are clearly distinct from background gray levels, thresholding is a suitable segmentation method. Correct threshold selection is crucial for successful threshold segmentation; the selection can be determined interactively or it can be the result of some threshold detection method. Only under very unusual circumstances can thresholding be successful using a single threshold for the whole image (global thresholding), since even in very simple images there are likely to be gray-level variations within objects and background. This variation may be due to non-uniform lighting, non-uniform input device parameters or a number of other factors. The alternative is segmentation using variable thresholds (adaptive thresholding), in which the threshold value varies over the image as a function of local image characteristics. A global threshold is determined from the whole image f: T = T(f). Local thresholds are position dependent: T = T(f, fc), where fc is the image part in which the threshold is determined. One option is to divide the image f into sub-images fc and determine a threshold independently in each sub-image; if a threshold cannot be determined in some sub-image, it can be interpolated from the thresholds determined in neighbouring sub-images. Each sub-image is then processed with respect to its local threshold. Basic thresholding as defined above has many modifications. One possibility is to segment an image into regions of pixels with gray levels from a set D, and into background otherwise (band thresholding):
g(i, j) = 1 for f(i, j) ∈ D,
g(i, j) = 0 otherwise.
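A brief MATLAB sketch of global (Otsu) and adaptive thresholding; the file name is a placeholder:

% Global thresholding with Otsu's method, plus a locally adaptive alternative.
gray     = rgb2gray(imread('paddy_leaf.jpg'));   % placeholder file name
T        = graythresh(gray);                     % Otsu's method, normalised T in [0, 1]
bwGlobal = imbinarize(gray, T);                  % g = 1 where f >= T, 0 otherwise
bwLocal  = imbinarize(gray, 'adaptive');         % threshold varies with local image statistics
imshowpair(bwGlobal, bwLocal, 'montage');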
III. Results & Measurements
There are four main steps used for the detection of plant leaf diseases:
1. RGB image acquisition.
2. Convert the input image into gray scale.
3. Segment the component.
4. Obtain the useful segment.
RGB Image Acquisition
The diseased paddy image is used as the input.
Fig 5. Diseased paddy (Sheath Blight, Brown spot)
Convert the Input Image into Gray Scale
Input colour images have the primary colours red, green and blue. It is difficult to work with the RGB image directly because of its range (0-255); hence the RGB image is converted into a gray-scale image.
Fig 6. Gray scale image
Segment the Component
Image segmentation is the process of partitioning a digital image into multiple segments (sets of pixels, also known as super-pixels). The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyse. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain characteristics. After segmentation of the gray-scale image, the required infected region of the paddy leaf is obtained.
Fig 7. Infected Leaf
IV. CONCLUSION
There are several automated systems available in the literature which are developed for irrigation control and environmental monitoring in the field. However, it is essential to monitor the plant growth stage by stage and take decisions accordingly. In addition, monitoring environmental parameters such as pH and moisture content is essential. In the past, problems were identified only by human perception, which was difficult and error-prone. In this work, various paddy diseases are identified using image processing, which is accurate and easy to interpret.