International Journal of Information Sciences and Techniques (IJIST) Vol.4, No.3, May 2014
DOI: 10.5121/ijist.2014.4320
TRACKING AND COUNTING THE VEHICLES IN NIGHT SCENES

S. Kanagamalliga1, Dr. S. Vasuki2, A. Kanimozhidevi3, S. Priyadharshni4, S. Rajeswari5

1Assistant Professor, 2Professor and Head, 3,4,5U.G. Student
Department of Electronics and Communication Engineering, Velammal College of Engineering and Technology, Madurai
ABSTRACT:
The main objective of a traffic surveillance system is to reduce the risk of accidents. Most published work addresses vehicle detection only during the daytime; we propose a method to detect vehicles at night. The objective of this paper is to count vehicles in nighttime traffic scenes. The method consists of three phases: headlight segmentation, tracking, and pairing. At night the only reliably visible feature of a vehicle is its headlights, so vehicles are counted from headlight information. Initially the headlights of a vehicle are segmented based on an analysis of headlight size, location, and area. They are then tracked by a tracking procedure designed to detect vehicles, and the headlights are paired using connected component labeling. Vehicles are counted from the pairing result. For error compensation we use a fuzzy hybrid information inference mechanism: fuzzy logic generates rules based on size, color, and position, and in particular on the distance between the two headlights of a vehicle. This improves the accuracy of the vehicle count.
Keywords:
Headlight pairing, headlight tracking, bidirectional algorithm, intelligent transportation system
I INTRODUCTION:
A traffic surveillance system extracts accurate traffic information, such as vehicle count, vehicle classification, and vehicle speed, for traffic flow control. Counting vehicles is easier in the daytime than at night: in daylight the vehicles themselves are visible, whereas at night they are obscured by dim lighting and by strong headlight reflections on the road surface, which makes counting difficult. However, at night only the headlights of a vehicle have high intensity values, and this information can be used to detect vehicles in night scenes. The existing work concentrated on nighttime vehicle headlight detection and tracking in grayscale images. A bidirectional algorithm was used for tracking and pairing the headlights; in particular, a vanishing point calculation was used to track and pair them. There is a problem in the vanishing
point calculation: because of it the count is not accurate, as discussed in detail in the existing work description. In this paper, we present an approach for vehicle headlight detection, tracking, and pairing in nighttime traffic surveillance video, using a simple yet effective bidirectional reasoning algorithm that incorporates size, position, edge, and motion information. Finally, the trajectory of the vehicle's headlights is used to calibrate the surveillance camera and to differentiate moving vehicles from the motionless background based on change detection. Other studies [3][4] use spatial-temporal difference features to segment moving vehicles, while other methods extract moving vehicles using techniques based on background subtraction.
II EXISTING WORK DESCRIPTION

In the existing work, vehicle headlights are detected, tracked, and paired at night. Headlight detection uses two features: a reflection intensity map and a reflection-suppressed map. The reflection intensity map is obtained from a light attenuation model, and the reflection-suppressed map from a Laplacian of Gaussian filter; together these remove the reflections on the road surface. The headlights are then tracked and paired by the bidirectional reasoning algorithm, from which the number of vehicles is counted. Finally, the vehicle speed is estimated using the surveillance camera. The bidirectional reasoning combines position, size, vanishing point, and motion information. The following tabulation lists the number of vehicles counted per frame using the vanishing point calculation.
Fig(i) tabulation represents the number of vehicles detected per frame
III PROBLEM IN THE EXISTING WORK

In the existing work, the vanishing point calculation is used for pairing the headlights of a vehicle. A problem occurs when one headlight of a four-wheeler is wrongly paired with the headlight of a two-wheeler: the wrongly paired headlights are then included in the count as a vehicle. The pairing of vehicle headlights using the vanishing point calculation is shown in figure (a); note that one headlight of some cars is wrongly paired with a headlight of another car. The existing work therefore does not give the correct result.
S.NO   NUMBER OF VEHICLES   DETECTION TIME   ACCURACY
1      25                   1.2944           92.5%
2      19                   0.3419           93.4%
3      21                   0.3230           94.5%
4      18                   0.3012           95.9%
Fig(a) shows the pairing of vehicle headlights using the vanishing point calculation.
Fig(b) graph plots the vehicle count against the number of frames.
IV PROJECT DESCRIPTION

In the proposed work, vehicles are counted in nighttime traffic scenes. The method has three phases: preprocessing, segmentation, and tracking and pairing of the vehicle headlights. Initially the captured video is converted into frames, and the frames undergo preprocessing, in which filters remove noise. Since only the headlights of a vehicle are visible at night, counting relies on headlight information, so the headlights must first be segmented. Segmentation uses an edge-based technique whose main objective is to eliminate the road reflections caused by the headlights. Once segmented, the headlights are tracked and paired: connected component labeling (CCL) is used for tracking, and the bidirectional reasoning algorithm for pairing. Vehicles are counted from the paired results. For error compensation we use a fuzzy hybrid information inference mechanism (FHIIM): fuzzy logic generates rules based on size, shape, color, and the distance between the two headlights of a vehicle.
In particular, the rules are generated based on the distance between the two headlights of a vehicle, which improves the accuracy of the vehicle count.
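The distance-based rule can be sketched as a simple triangular fuzzy membership function. This is an illustrative Python sketch only: the function names and the support values (30, 60, and 90 pixels) are assumptions for demonstration, not the parameters used in the paper.

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside [a, c], rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def rule_same_vehicle(distance_px):
    """Degree to which a headlight gap looks like a typical car width.
    The support [30, 60, 90] pixels is a hypothetical choice."""
    return triangular(distance_px, 30, 60, 90)

confidence = rule_same_vehicle(60)  # peak of the membership function
```

A candidate pairing whose headlight distance yields a low membership degree can then be penalised or rejected, which is how a distance rule compensates for wrong pairings.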
(c) (d)
Fig(c) and (d) represent the input video and the preprocessed image.
V MODULE DESCRIPTION
A.LOAD VIDEO:

The input video is loaded and converted into frames using A = aviread(pathname, filename), which reads frames from the AVI file specified by the string filename; if the file is not in the current folder or in a folder on the MATLAB path, the full pathname must be specified. The input video is in .avi format and is converted into frames at a rate of 15 frames per second. The video used here is a car traffic video, from which we obtain 20 x 10 = 200 frames in total.
B.PREPROCESSING:

Preprocessing is the removal of noise from the frames. Of the several available filters we use the Gaussian filter, a non-uniform low-pass filter, as shown in fig(d). Noise may be introduced by displacement of the camera lens, by the transformation of an image, or by changes in atmospheric conditions. Removing it is important to ensure that the vehicle headlights appear very clearly.
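The Gaussian smoothing step can be sketched as follows. The paper's implementation is in MATLAB; this is a minimal NumPy equivalent that exploits the separability of the Gaussian (filter rows, then columns). The sigma value and the synthetic frame are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel1d(sigma, radius):
    """1-D Gaussian weights, normalised to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-(x ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(frame, sigma=1.0):
    """Separable Gaussian low-pass filter: convolve rows, then columns."""
    k = gaussian_kernel1d(sigma, int(3 * sigma))
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1,
                              frame.astype(float))
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

# A synthetic noisy frame with one bright spot standing in for a headlight.
rng = np.random.default_rng(0)
frame = rng.normal(0.0, 10.0, (32, 32))
frame[16, 16] = 255.0
smooth = gaussian_blur(frame, sigma=1.0)
```

Because the filter averages neighbouring pixels, the variance of the white noise drops while the bright spot survives as a smoothed peak.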
C.EDGE BASED DETECTION:
Edge detection is the approach most frequently used for segmenting images based on abrupt (local) changes in intensity; it detects and links edge pixels to form contours. Edge detection involves several steps: smoothing, enhancement, detection, and localization. In this paper the edges are detected by gradient edge detection
because Laplacian edge detection fails on edges with high-intensity variations and cannot handle images with heavy noise. Of the several gradient edge detectors, we use the Canny edge detector to detect the edges in congested scenes.
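The gradient stage that Canny builds on can be sketched with the 3x3 Sobel operators. This is an illustrative NumPy sketch of gradient-magnitude computation, not the full Canny pipeline or the authors' implementation; the test image is synthetic.

```python
import numpy as np

def sobel_gradient(img):
    """Gradient-magnitude image using the 3x3 Sobel operators (the
    intensity-gradient stage of the Canny pipeline)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

# A vertical step edge: the response is large along the boundary
# and zero in the flat regions.
img = np.zeros((8, 8))
img[:, 4:] = 255.0
mag = sobel_gradient(img)
```

Canny then thins this magnitude map with non-maximum suppression and links edges with hysteresis thresholding.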
D.BRIGHT OBJECT EXTRACTION:
For extracting the headlights we use a thresholding technique. At night, headlight pixels have high intensity values, and the segmentation is based on this. Suppose the gray-level histogram corresponds to an image f(x, y) composed of bright objects on a dark background, such that object and background pixels have gray levels grouped into two dominant modes. One obvious way to extract the objects from the background is to select a threshold T that separates these modes: any point (x, y) for which f(x, y) > T is called an object point; otherwise it is called a background point. The result is a set of bright objects that contains the vehicle headlights together with other bright objects.
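The f(x, y) > T rule above is a one-line operation on an image array. A minimal NumPy sketch, with a toy frame standing in for a real traffic image:

```python
import numpy as np

def extract_bright(frame, T=240):
    """Binary mask of object points: any (x, y) with f(x, y) > T."""
    return frame > T

# Toy grayscale frame: one bright headlight blob and one dim reflection.
frame = np.zeros((10, 10), dtype=np.uint8)
frame[2:4, 2:4] = 250   # headlight pixels, above the threshold
frame[6, 6] = 100       # dim road pixel, below the threshold
mask = extract_bright(frame, T=240)
```

Only the four headlight pixels survive the threshold; the dim pixel is classified as background.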
E.MORPHOLOGICAL OPERATION:

Although the bright objects have been extracted, the headlights of a vehicle must still be isolated. At night, road reflections are a major problem and may reduce the counting accuracy, so it is important to suppress them; for this we use morphological operations. Of the several morphological operators, we use the opening and closing operators, after which only the vehicle headlights remain.
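The reflection-suppression effect of opening can be sketched as follows. This is an illustrative NumPy implementation with a 3x3 square structuring element; the element size and the toy mask are assumptions, not the paper's parameters.

```python
import numpy as np

def erode(mask):
    """Binary erosion with a 3x3 square structuring element."""
    out = np.zeros_like(mask)
    for i in range(1, mask.shape[0] - 1):
        for j in range(1, mask.shape[1] - 1):
            out[i, j] = mask[i - 1:i + 2, j - 1:j + 2].all()
    return out

def dilate(mask):
    """Binary dilation with a 3x3 square structuring element."""
    out = np.zeros_like(mask)
    for i in range(1, mask.shape[0] - 1):
        for j in range(1, mask.shape[1] - 1):
            out[i, j] = mask[i - 1:i + 2, j - 1:j + 2].any()
    return out

def opening(mask):
    """Opening = erosion followed by dilation; thin structures vanish,
    compact blobs survive."""
    return dilate(erode(mask))

mask = np.zeros((12, 12), dtype=bool)
mask[2:6, 2:6] = True    # compact headlight blob
mask[8, 1:11] = True     # one-pixel-wide reflection streak
opened = opening(mask)
```

The thin streak (a stand-in for a road reflection) is erased by the erosion, while the compact headlight blob is restored by the dilation.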
F.VEHICLE TRACKING AND PAIRING:
The morphological operation yields a binary image. For tracking and pairing the vehicle headlights we use the bidirectional algorithm, which works as follows. Pixels are grouped according to their intensity values; if two groups are of similar size, they are considered to be the two headlights of the same vehicle, and the headlights are thereby detected. The centroid of each group is computed, and if two centroids lie on the same line the two headlights are paired and the vehicle count is incremented. The vehicle tracking and identification process includes three phases. First, vehicle component tracking associates the motion of vehicle components in succeeding frames by analyzing their spatial and temporal features. Then a motion-based grouping process is applied to the tracked components to construct whole moving vehicles, which are tracked in the vehicle tracking phase. Finally, the vehicle recognition phase identifies and classifies the types of tracked vehicles.
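The size-and-alignment pairing rule can be sketched as follows, assuming the headlight blobs have already been labelled (e.g. by CCL) and reduced to centroids and areas. The thresholds and blob values are illustrative assumptions, not the paper's numbers.

```python
def pair_headlights(blobs, y_tol=5, area_ratio=0.7):
    """Greedy pairing: two blobs form a vehicle when their centroids lie on
    (nearly) the same horizontal line and their areas are similar.
    `blobs` is a list of (cx, cy, area) tuples; the thresholds are
    illustrative, not the paper's values."""
    used, pairs = set(), []
    for i, (xi, yi, ai) in enumerate(blobs):
        if i in used:
            continue
        for j, (xj, yj, aj) in enumerate(blobs):
            if j <= i or j in used:
                continue
            same_line = abs(yi - yj) <= y_tol
            same_size = min(ai, aj) / max(ai, aj) >= area_ratio
            if same_line and same_size:
                used.update((i, j))
                pairs.append((i, j))
                break
    return pairs

# Two cars (two aligned pairs) and one lone motorbike headlight.
blobs = [(10, 50, 40), (60, 51, 38),   # car 1
         (15, 90, 30), (70, 92, 33),   # car 2
         (40, 20, 35)]                 # motorbike, left unpaired
pairs = pair_headlights(blobs)
vehicle_count = len(pairs)
```

Each accepted pair increments the vehicle count; the unpaired blob illustrates why a lone two-wheeler headlight must not be forced into a pair.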
VI EXPERIMENTAL RESULTS
(e) (f)
Fig(e) represents the edge detection output and Fig(f) represents the bright-light extraction.
(g) (h)
Fig(g) represents the output of the morphological operation and Fig(h) represents the pairing of vehicle headlights.
A RESULTS ON EDGE DETECTION

Fig(e) above represents the output of edge detection. The Canny edge detector is used to detect the edges in congested scenes because it offers good detection, good localization, and minimal response. It uses a multi-stage algorithm to detect a wide range of edges in an image; the stages are noise reduction, finding the intensity gradient of the image, non-maximum suppression, and tracing edges through the image with hysteresis thresholding.
B RESULTS ON HEADLIGHT EXTRACTION
In nighttime video scenes, headlights are a strong and consistent feature indicating the presence of vehicles. By applying the thresholding method, the headlights of moving vehicles can be efficiently segmented in nighttime traffic images, as shown in fig
(f). Here the threshold is set at a gray level of 240; pixels above this value are segmented as headlight pixels in the traffic image sequences.
C RESULTS ON THRESHOLDING

Fig(f) above represents the bright-light extraction. To count the number of vehicles in night scenes, the headlights are a strong and consistent feature. In this study, the headlight pixels are segmented from grayscale images by the thresholding method, with the threshold set at a gray level of 240.
D RESULTS ON MORPHOLOGICAL OPERATION

After segmenting the headlight regions, some reflections remain, mainly from common light sources such as rear lights and street lamps, and from reflections of the headlights themselves. These reflections are the main source of interference with headlight detection. Headlights can be distinguished from reflections by their shapes, because headlight shapes are stable compared with their reflections. The morphological operation therefore suppresses the reflections on the road surface, as shown in the figure.
Fig(j) graph plots the vehicle count against the number of frames.
Fig(i) tabulation represents the number of vehicles detected per frame.
VII CONCLUSION

The proposed method performs vehicle detection and tracking at night for identifying and classifying moving vehicles in congested areas. Initially the edges of objects are detected using a Canny edge detector, and the bright-object extraction in the segmentation process is done by the thresholding technique. The vehicle headlights are then tracked using the connected component labeling technique and paired by the bidirectional reasoning algorithm, from which the number of vehicles is obtained with high accuracy.
S.NO   NUMBER OF VEHICLES   DETECTION TIME   ACCURACY
1      22                   0.5860           95.5%
2      15                   0.4054           96.3%
3      17                   0.3665           97.5%
4      16                   0.3554           98.2%
VIII FUTURE ENHANCEMENT

For further study, the vehicle type classification function can be improved and extended by integrating sophisticated machine learning techniques, such as support vector machine classifiers, over multiple features including vehicle lights and vehicle bodies. This would enhance classification of more detailed vehicle types, such as sedans, buses, trucks, lorries, and light and heavy motorbikes; in particular, it would allow four-wheelers and two-wheelers to be separated.
REFERENCES
[1] S. Tsugawa, “Vision-based vehicles in Japan: Machine vision systems and driving control systems,”
IEEE Trans. Ind. Electron., vol. 41, no. 4, pp. 398–405, Aug. 1994.
[2] Z. Sun, G. Bebis, and R. Miller, “On-road vehicle detection: A review,” IEEE Trans. Pattern Anal.
Mach. Intell., vol. 28, no. 5, pp. 694–711, May 2006.
[3] A. Broggi, M. Bertozzi, A. Fascioli, and G. Conte, Automatic Vehicle Guidance: The Experience of the
ARGO Autonomous Vehicle. Singapore: World Scientific, 1999.
[4] H. Mori, M. Charkari, and T. Matsushita, “On-line vehicle and pedestrian detections based on sign
pattern,” IEEE Trans. Ind. Electron., vol. 41, no. 4, pp. 384–391, Aug. 1994.
[5] A. Broggi, M. Cellario, P. Lombardi, and M. Porta, “An evolutionary approach to visual sensing for
vehicle navigation,” IEEE Trans. Ind. Electron., vol. 50, no. 1, pp. 18–29, Feb. 2003.
[6] T. Bücher, C. Curio, J. Edelbrunner, C. Igel, D. Kastrup, I. Leefken, G. Lorenz, A. Steinhage, and W.
Seelen, “Image processing and behavior planning for intelligent vehicles,” IEEE Trans. Ind. Electron.,
vol. 50, no. 1, pp. 62–75, Feb. 2003.
[7] S. Gupte, O. Masoud, R. F. K. Martin, and N. P. Papanikolopoulos, “Detection and classification of
vehicles,” IEEE Trans. Intell. Transp. Syst., vol. 3, no. 1, pp. 37–47, Mar. 2002.
[8] B.-F. Wu, S.-P. Lin, and Y.-H. Chen, “A real-time multiple-vehicle detection and tracking system
with prior occlusion detection and resolution,” in Proc. IEEE Int. Symp. Signal Process. Inf. Technol.,
Dec. 2005, pp. 311–316.
[9] J. Zhou, D. Gao, and D. Zhang, “Moving vehicle detection for automatic traffic monitoring,” IEEE
Trans. Veh. Technol., vol. 56, no. 1, pp. 51–59, Jan. 2007.
[10] W. F. Gardner and D. T. Lawton, “Interactive model-based vehicle tracking,” IEEE Trans. Pattern
Anal. Mach. Intell., vol. 18, no. 11, pp. 1115– 1121, Nov. 1996.
[11] G. D. Sullivan, K. D. Baker, A. D. Worrall, C. I. Attwood, and P. M. Remagnino, “Model-based
vehicle detection and classification using orthographic approximations,” Image Vis. Comput., vol. 15,
no. 8, pp. 649–654, Aug. 1997.
[12] D. Koller, J. Weber, T. Huang, J. Malik, G. Ogasawara, B. Rao, and S. Russell, “Towards robust
automatic traffic scene analysis in real-time,” in Proc. Int. Conf. Pattern Recog., 1994, vol. 1, pp.
126–131.
[13] N. Peterfreund, “Robust tracking of position and velocity with Kalman snakes,” IEEE Trans. Pattern
Anal. Mach. Intell., vol. 21, no. 6, pp. 564– 569, Jun. 1999.
[14] S.-T. Tseng and K.-T. Song, “Real-time image tracking for traffic monitoring,” in Proc. IEEE 5th Int.
Conf. Intell. Transp. Syst., 2002, pp. 10–14.
[15] L.-W. Tsai, J.-W. Hsieh, and K.-C. Fan, “Vehicle detection using normalized color and edge map,”
IEEE Trans. Image Process., vol. 16, no. 3, pp. 850–864, Mar. 2007.
[16] D. Beymer, P. McLauchlan, B. Coifman, and J. Malik, “A real-time computer vision system for
measuring traffic parameters,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., Jun. 1997, pp. 495–
510.

More Related Content

PDF
Vehicle Recognition at Night Based on Tail LightDetection Using Image Processing
PDF
Automatic Vehicle Detection Using Pixelwise Classification Approach
PDF
IRJET- Simultaneous Localization and Mapping for Automatic Chair Re-Arran...
PDF
A017430110
PDF
40120140501008
PDF
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
PDF
License plate recognition.
PDF
License Plate Recognition using Morphological Operation.
Vehicle Recognition at Night Based on Tail LightDetection Using Image Processing
Automatic Vehicle Detection Using Pixelwise Classification Approach
IRJET- Simultaneous Localization and Mapping for Automatic Chair Re-Arran...
A017430110
40120140501008
Leader Follower Formation Control of Ground Vehicles Using Dynamic Pixel Coun...
License plate recognition.
License Plate Recognition using Morphological Operation.

What's hot (19)

PDF
Robot Pose Estimation: A Vertical Stereo Pair Versus a Horizontal One
PDF
HUMAN BODY DETECTION AND SAFETY CARE SYSTEM FOR A FLYING ROBOT
PPTX
Automatic no. plate recognition
PDF
CANNY EDGE DETECTION BASED REAL-TIME INTELLIGENT PARKING MANAGEMENT SYSTEM
PDF
Enhancement performance of road recognition system of autonomous robots in sh...
PDF
IMPLEMENTATION OF LANE TRACKING BY USING IMAGE PROCESSING TECHNIQUES IN DEVEL...
PDF
Applications of Image Processing and Real-Time embedded Systems in Autonomous...
PDF
EXPLORING SOUND SIGNATURE FOR VEHICLE DETECTION AND CLASSIFICATION USING ANN
PDF
Real-Time Multiple License Plate Recognition System
PPTX
Automatic vehicle license plate detection using VEDA
PDF
Autonomous Path Planning and Navigation of a Mobile Robot with Multi-Sensors ...
PDF
Help the Genetic Algorithm to Minimize the Urban Traffic on Intersections
PPTX
AUTOMATIC CAR LICENSE PLATE RECOGNITION USING VEDA
PDF
Paper id 252014106
PDF
AN INNOVATIVE RESEARCH FRAMEWORK ON INTELLIGENT TEXT DATA CLASSIFICATION SYST...
PDF
A Robot Collision Avoidance Method Using Kinect and Global Vision
PDF
Path Planning for Mobile Robot Navigation Using Voronoi Diagram and Fast Marc...
PDF
Ijecet 06 10_003
PDF
Traffic Light Controller System using Optical Flow Estimation
Robot Pose Estimation: A Vertical Stereo Pair Versus a Horizontal One
HUMAN BODY DETECTION AND SAFETY CARE SYSTEM FOR A FLYING ROBOT
Automatic no. plate recognition
CANNY EDGE DETECTION BASED REAL-TIME INTELLIGENT PARKING MANAGEMENT SYSTEM
Enhancement performance of road recognition system of autonomous robots in sh...
IMPLEMENTATION OF LANE TRACKING BY USING IMAGE PROCESSING TECHNIQUES IN DEVEL...
Applications of Image Processing and Real-Time embedded Systems in Autonomous...
EXPLORING SOUND SIGNATURE FOR VEHICLE DETECTION AND CLASSIFICATION USING ANN
Real-Time Multiple License Plate Recognition System
Automatic vehicle license plate detection using VEDA
Autonomous Path Planning and Navigation of a Mobile Robot with Multi-Sensors ...
Help the Genetic Algorithm to Minimize the Urban Traffic on Intersections
AUTOMATIC CAR LICENSE PLATE RECOGNITION USING VEDA
Paper id 252014106
AN INNOVATIVE RESEARCH FRAMEWORK ON INTELLIGENT TEXT DATA CLASSIFICATION SYST...
A Robot Collision Avoidance Method Using Kinect and Global Vision
Path Planning for Mobile Robot Navigation Using Voronoi Diagram and Fast Marc...
Ijecet 06 10_003
Traffic Light Controller System using Optical Flow Estimation
Ad

Similar to Tracking and counting the (20)

PDF
IRJET- Reckoning the Vehicle using MATLAB
PDF
A0140109
PDF
A real-time system for vehicle detection with shadow removal and vehicle clas...
PDF
IRJET- A Survey of Approaches for Vehicle Traffic Analysis
PDF
IRJET- A Survey of Approaches for Vehicle Traffic Analysis
PDF
Traffic Light Detection and Recognition for Self Driving Cars using Deep Lear...
PDF
Kq3518291832
PDF
Traffic flow measurement for smart traffic light system design
PDF
Vehicle detection and tracking techniques a concise review
PDF
Traffic Violation Detection Using Multiple Trajectories of Vehicles
PDF
G04743943
PPTX
Vehicle counting for traffic management
PDF
IRJET- Front View Identification of Vehicles by using Machine Learning Te...
PDF
Vehicle Counting Module Design in Small Scale for Traffic Management in Smart...
PPTX
Intelligent Traffic light detection for individuals with CVD
PDF
License plate extraction of overspeeding vehicles
PDF
Vehicle Tracking Using Kalman Filter and Features
PDF
Fb4301931934
PDF
IRJET - Traffic Density Estimation by Counting Vehicles using Aggregate Chann...
IRJET- Reckoning the Vehicle using MATLAB
A0140109
A real-time system for vehicle detection with shadow removal and vehicle clas...
IRJET- A Survey of Approaches for Vehicle Traffic Analysis
IRJET- A Survey of Approaches for Vehicle Traffic Analysis
Traffic Light Detection and Recognition for Self Driving Cars using Deep Lear...
Kq3518291832
Traffic flow measurement for smart traffic light system design
Vehicle detection and tracking techniques a concise review
Traffic Violation Detection Using Multiple Trajectories of Vehicles
G04743943
Vehicle counting for traffic management
IRJET- Front View Identification of Vehicles by using Machine Learning Te...
Vehicle Counting Module Design in Small Scale for Traffic Management in Smart...
Intelligent Traffic light detection for individuals with CVD
License plate extraction of overspeeding vehicles
Vehicle Tracking Using Kalman Filter and Features
Fb4301931934
IRJET - Traffic Density Estimation by Counting Vehicles using Aggregate Chann...
Ad

More from ijistjournal (20)

PPTX
Submit Your Research Articles - International Journal of Information Sciences...
PDF
MATHEMATICAL EXPLANATION TO SOLUTION FOR EX-NOR PROBLEM USING MLFFN
PPTX
Call for Papers - International Journal of Information Sciences and Technique...
PDF
3rd International Conference on NLP, AI & Information Retrieval (NLAII 2025)
PDF
SURVEY ON LI-FI TECHNOLOGY AND ITS APPLICATIONS
PPTX
Research Article Submission - International Journal of Information Sciences a...
PDF
A BRIEF REVIEW OF SENTIMENT ANALYSIS METHODS
PDF
14th International Conference on Information Technology Convergence and Servi...
PPTX
Online Paper Submission - International Journal of Information Sciences and T...
PDF
New Era of Teaching Learning : 3D Marker Based Augmented Reality
PPTX
Submit Your Research Articles - International Journal of Information Sciences...
PDF
GOOGLE CLOUD MESSAGING (GCM): A LIGHT WEIGHT COMMUNICATION MECHANISM BETWEEN ...
PDF
6th International Conference on Artificial Intelligence and Machine Learning ...
PPTX
Call for Papers - International Journal of Information Sciences and Technique...
PDF
SURVEY OF ANDROID APPS FOR AGRICULTURE SECTOR
PDF
6th International Conference on Machine Learning Techniques and Data Science ...
PDF
International Journal of Information Sciences and Techniques (IJIST)
PPTX
Research Article Submission - International Journal of Information Sciences a...
PDF
SURVEY OF DATA MINING TECHNIQUES USED IN HEALTHCARE DOMAIN
PDF
International Journal of Information Sciences and Techniques (IJIST)
Submit Your Research Articles - International Journal of Information Sciences...
MATHEMATICAL EXPLANATION TO SOLUTION FOR EX-NOR PROBLEM USING MLFFN
Call for Papers - International Journal of Information Sciences and Technique...
3rd International Conference on NLP, AI & Information Retrieval (NLAII 2025)
SURVEY ON LI-FI TECHNOLOGY AND ITS APPLICATIONS
Research Article Submission - International Journal of Information Sciences a...
A BRIEF REVIEW OF SENTIMENT ANALYSIS METHODS
14th International Conference on Information Technology Convergence and Servi...
Online Paper Submission - International Journal of Information Sciences and T...
New Era of Teaching Learning : 3D Marker Based Augmented Reality
Submit Your Research Articles - International Journal of Information Sciences...
GOOGLE CLOUD MESSAGING (GCM): A LIGHT WEIGHT COMMUNICATION MECHANISM BETWEEN ...
6th International Conference on Artificial Intelligence and Machine Learning ...
Call for Papers - International Journal of Information Sciences and Technique...
SURVEY OF ANDROID APPS FOR AGRICULTURE SECTOR
6th International Conference on Machine Learning Techniques and Data Science ...
International Journal of Information Sciences and Techniques (IJIST)
Research Article Submission - International Journal of Information Sciences a...
SURVEY OF DATA MINING TECHNIQUES USED IN HEALTHCARE DOMAIN
International Journal of Information Sciences and Techniques (IJIST)

Recently uploaded (20)

PDF
Hindi spoken digit analysis for native and non-native speakers
PDF
Transform Your ITIL® 4 & ITSM Strategy with AI in 2025.pdf
PDF
From MVP to Full-Scale Product A Startup’s Software Journey.pdf
PDF
Video forgery: An extensive analysis of inter-and intra-frame manipulation al...
PDF
Hybrid model detection and classification of lung cancer
PPTX
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
PDF
project resource management chapter-09.pdf
PPTX
SOPHOS-XG Firewall Administrator PPT.pptx
PDF
Unlocking AI with Model Context Protocol (MCP)
PDF
A comparative analysis of optical character recognition models for extracting...
PPTX
A Presentation on Touch Screen Technology
PDF
ENT215_Completing-a-large-scale-migration-and-modernization-with-AWS.pdf
PDF
Building Integrated photovoltaic BIPV_UPV.pdf
PPTX
Chapter 5: Probability Theory and Statistics
PDF
NewMind AI Weekly Chronicles - August'25-Week II
PDF
Mushroom cultivation and it's methods.pdf
PDF
Getting Started with Data Integration: FME Form 101
PDF
Heart disease approach using modified random forest and particle swarm optimi...
PPTX
A Presentation on Artificial Intelligence
PDF
DASA ADMISSION 2024_FirstRound_FirstRank_LastRank.pdf
Hindi spoken digit analysis for native and non-native speakers
Transform Your ITIL® 4 & ITSM Strategy with AI in 2025.pdf
From MVP to Full-Scale Product A Startup’s Software Journey.pdf
Video forgery: An extensive analysis of inter-and intra-frame manipulation al...
Hybrid model detection and classification of lung cancer
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
project resource management chapter-09.pdf
SOPHOS-XG Firewall Administrator PPT.pptx
Unlocking AI with Model Context Protocol (MCP)
A comparative analysis of optical character recognition models for extracting...
A Presentation on Touch Screen Technology
ENT215_Completing-a-large-scale-migration-and-modernization-with-AWS.pdf
Building Integrated photovoltaic BIPV_UPV.pdf
Chapter 5: Probability Theory and Statistics
NewMind AI Weekly Chronicles - August'25-Week II
Mushroom cultivation and it's methods.pdf
Getting Started with Data Integration: FME Form 101
Heart disease approach using modified random forest and particle swarm optimi...
A Presentation on Artificial Intelligence
DASA ADMISSION 2024_FirstRound_FirstRank_LastRank.pdf

Tracking and counting the

  • 1. International Journal of Information Sciences and Techniques (IJIST) Vol.4, No.3, May 2014 DOI : 10.5121/ijist.2014.4320 165 TRACKING AND COUNTING THE VEHICLES IN NIGHT SCENES S. Kanagamalliga1 , Dr. S. Vasuki2 , A. Kanimozhidevi3 , S. Priyadharshni 4 , S. Rajeswari5 1 Assistant Professor, 2 Professor and Head, 3,4,5 U.G Student Department of Electronics and Communication Engineering, Velammal College of Engineering and Technology, Madurai ABSTRACT: The main objective of traffic surveillance system is to reduce the risk caused by accident. Many papers published were concerned only about the vehicle detection during daytime. But we proposed a method to detect the vehicle during night time. The main objective of our paper is to count the vehicles in night time traffic scenes. It consists of three phases, they are vehicle’s headlight segmentation, tracking and pairing. We know that in night time, only visible thing will be the headlight of the vehicle therefore we are counting the vehicle based on the headlight information. Initially the headlights of a vehicle is segmented based on the analysis of headlight size, location and area. Then tracked via a tracking procedure designed to detect vehicle and then headlights are paired using the connected component labeling. Based on the pairing result we can count the vehicles. Here we are using fuzzy hybrid information inference mechanism for error compensation. Here by using fuzzy logic we can generate the rules based on the size, color and position. In particular we are generating the rules based on the distance between the headlights of a vehicle.Thereby we can improve the accuracy while counting the vehicle. 
Keywords: Headlight pairing,headlight tracking, bidirectional algorithm ,intelligent transportation system I INTRODUCTION: Traffic surveillance system extracts accurate traffic information for traffic flow control such as vehicle count, vehicle classification and vehicle speed..Hence in the day time counting the vehicle will be somehow easier than in the night time because the vehicles will be visible in the day time whereas in the night time those vehicles will not be visible due to dim light and also that there will be a strong reflection on road surface due to vehicle’s headlights. Hence it will be difficult to calculate the vehicles in the night time.And also it is noted that in night time only the headlight of the vehicle will have high intensity value.Therefore by that information we can detect the vehicles in night scenes.In our existing paper,they concentrated on night time vehicle headlight detection and tracking in gray scale image. During the tracking and pairing the headlight, bidirectional algorithm has been used.In particular they used a vanishing point calculation for tracking and pairing the vehicle’s headlight.There is a problem in the vanishing
point calculation: because of it the count is not accurate, as described in detail in the existing-work section. In this paper we present an approach for vehicle headlight detection, tracking, and pairing in nighttime traffic surveillance video, using a simple yet effective bidirectional reasoning algorithm that incorporates size, position, edge-based, and motion information. Finally, the trajectory of the vehicle headlights is used to calibrate the surveillance camera so that moving vehicles can be differentiated from the motionless background by change detection. Other studies [3][4] use spatial-temporal difference features to segment moving vehicles, while other methods use background subtraction to extract moving vehicles.

II EXISTING WORK DESCRIPTION
In the existing work, vehicle headlights are detected, tracked, and paired at night. Headlight detection uses two features: a reflection intensity map and a reflection-suppressed map. The reflection intensity map is obtained from a light attenuation model, and the reflection-suppressed map from a Laplacian-of-Gaussian filter; together they remove the reflections on the road surface. The headlights are then tracked and paired with the bidirectional reasoning algorithm, which combines position, size, vanishing-point, and motion information, and the number of vehicles is counted from the paired results. Finally, vehicle speed is estimated using the surveillance camera. The following tabulation reports the number of vehicles counted per frame using the vanishing-point calculation.
Fig(i) tabulation represents the number of vehicles detected per frame (existing work):

S.NO | NUMBER OF VEHICLES | DETECTION TIME | ACCURACY
  1  |        25          |     1.2944     |   92.5%
  2  |        19          |     0.3419     |   93.4%
  3  |        21          |     0.3230     |   94.5%
  4  |        18          |     0.3012     |   95.9%

III PROBLEM IN THE EXISTING WORK
In the existing work, a vanishing-point calculation is used to pair the headlights of a vehicle. A problem occurs when one headlight of a four-wheeler is wrongly paired with the headlight of a two-wheeler: the wrongly paired headlights are then also included in the count. The pairing of headlights by the vanishing-point calculation is shown in figure (a), from which it can be seen that some headlights of one car are wrongly paired with headlights of another car. The existing work therefore does not give the correct result.
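The reflection-suppressed map of the existing work is produced with a Laplacian-of-Gaussian filter. As a minimal numpy sketch, a discrete LoG kernel can be built directly from the analytic formula; the kernel size and sigma below are illustrative choices, not values taken from the paper:

```python
import numpy as np

def log_kernel(size=9, sigma=1.4):
    """Discrete Laplacian-of-Gaussian kernel, normalised to zero mean
    so it gives no response on flat regions (e.g. uniform road surface)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (-1.0 / (np.pi * sigma**4)
         * (1 - r2 / (2 * sigma**2))
         * np.exp(-r2 / (2 * sigma**2)))
    return k - k.mean()

k = log_kernel()
print(k.shape, k[4, 4] < 0)  # centre of this sign convention is negative
```

Convolving a frame with this kernel emphasises compact bright blobs (headlights) while suppressing the smooth gradients of road reflections, which is the role the reflection-suppressed map plays in the existing work.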
Fig(a) shows the pairing of vehicle headlights using the vanishing-point calculation. Fig(b) is a graph of vehicle count versus number of frames.

IV PROJECT DESCRIPTION
In the proposed work we count the vehicles in a nighttime traffic scene. The method has three phases: preprocessing, segmentation, and tracking and pairing of the vehicle headlights. Initially the captured video is converted into frames, and the frames are preprocessed: filtering removes the noise from each frame. Since only the headlights are visible at night, the vehicles are counted from headlight information, and the headlights must first be segmented. Segmentation uses an edge-based technique, whose main purpose is to eliminate the road reflections caused by the headlights. Once the headlights are segmented they are tracked and paired: connected-component labelling (CCL) is used for tracking, and the bidirectional reasoning algorithm for pairing. The vehicles in the night scene are counted from the paired results. For error compensation we use the fuzzy hybrid information inference mechanism (FHIIM): fuzzy rules are generated from size, shape, colour, and the distance between the two headlights of a vehicle.
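The paper does not give the FHIIM rule base explicitly, but the idea of a distance-based pairing rule can be sketched with ordinary triangular memberships. Everything below (the membership ranges, the `pair_confidence` helper, the min-combination) is a hypothetical illustration, not the authors' actual mechanism:

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership: rises from a to b, falls from b to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def pair_confidence(dist_px, size_ratio):
    """Toy fuzzy rule: two headlights pair well when their separation is
    plausible AND their blob sizes are similar (min acts as fuzzy AND)."""
    mu_dist = tri(dist_px, 40, 90, 140)       # plausible separation (pixels)
    mu_size = tri(size_ratio, 0.5, 1.0, 2.0)  # similar blob areas
    return min(mu_dist, mu_size)

print(pair_confidence(90, 1.0))  # → 1.0 (ideal separation, equal sizes)
print(pair_confidence(30, 1.0))  # → 0.0 (too close: likely one lamp split in two)
```

A rule like this lets a borderline pairing (e.g. a car headlight near a motorbike lamp) receive a low confidence instead of a hard wrong decision, which is the error-compensation role the paper assigns to FHIIM.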
In particular, the rules are generated from the distance between the two headlights of a vehicle, which improves the counting accuracy.

(c) (d)
Fig(c) and (d) represent the input video and the preprocessed image.

V MODULE DESCRIPTION
A. LOAD VIDEO:
The input video is loaded and converted into frames using mov = aviread(filename), which reads a movie from the AVI file specified by the string filename; if the file is not in the current folder or on the MATLAB path, the full pathname must be given. The input video is in .avi format and is converted into frames at a rate of 15 frames per second. The video used here is a car-traffic video, which yields a 20 x 10 grid of frames, i.e. 200 frames in total.

B. PREPROCESSING:
Preprocessing is the removal of noise from the frames. Of the several available filters we use the Gaussian filter, a non-uniform low-pass filter. Noise may be introduced by displacement of the camera lens, during transformation of an image, or by changes in atmospheric conditions; the preprocessed result is shown in fig(d). Removing the noise is important so that the vehicle headlights are clear.

C. EDGE-BASED DETECTION:
Edge detection is the approach used most frequently for segmenting images based on abrupt (local) changes in intensity. It detects and links edge pixels to form contours, in several steps: smoothing, enhancement, detection, and localization. In this paper the edges are detected with gradient edge detection
because Laplacian edge detection fails on edges with high intensity-point variation and cannot handle images with heavy noise. Among the several gradient edge detectors, we use the Canny edge detector to detect edges in congested scenes.

D. BRIGHT OBJECT EXTRACTION:
The headlights are extracted with a thresholding technique. At night the headlight pixels have high intensity values, and the headlights can be segmented on that basis. Suppose the gray-level histogram corresponds to an image f(x,y) composed of bright objects on a dark background, such that object and background pixels have gray levels grouped into two dominant modes. One obvious way to extract the objects from the background is to select a threshold T that separates these modes: any point (x,y) for which f(x,y) > T is called an object point; otherwise it is a background point. The result is a set of bright objects consisting of the vehicle headlights together with other bright objects.

E. MORPHOLOGICAL OPERATION
Even after the bright objects are extracted, the vehicle headlights must still be separated out, because at night the reflections on the road are a major problem and some of them reduce the counting accuracy. These reflections are suppressed with morphological operations; of the several morphological operators we use opening and closing. As a result only the headlights of the vehicles remain.

F. VEHICLE TRACKING AND PAIRING:
The morphological operation yields a binary image.
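The bright-object extraction and reflection suppression above amount to a threshold at T followed by a binary opening. A minimal numpy sketch, assuming a 3x3 structuring element and the paper's threshold of 240 (the toy frame itself is invented for illustration):

```python
import numpy as np

def shift_stack(img, pad_val):
    """All nine 3x3-neighbourhood shifts of a 2-D array, border-padded."""
    p = np.pad(img, 1, constant_values=pad_val)
    return np.stack([p[1 + dy:p.shape[0] - 1 + dy, 1 + dx:p.shape[1] - 1 + dx]
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)])

def opening(mask):
    """Binary opening (erosion then dilation) with a 3x3 element; removes
    bright specks smaller than the element, such as thin road reflections.
    Borders are padded True for erosion so the image edge is not eaten away."""
    eroded = shift_stack(mask, True).all(axis=0)
    return shift_stack(eroded, False).any(axis=0)

frame = np.zeros((12, 12), dtype=np.uint8)
frame[2:6, 2:6] = 250          # headlight-sized bright blob
frame[9, 9] = 255              # single-pixel reflection glint
mask = frame > 240             # bright-object threshold from the paper
clean = opening(mask)
print(mask.sum(), clean.sum())  # the glint survives thresholding but not opening
```

The headlight blob passes through the opening almost unchanged, while the isolated glint, which has no 3x3 neighbourhood of bright pixels, is removed; this is exactly the shape-stability argument the paper uses to separate headlights from reflections.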
The vehicle headlights are tracked and paired with the bidirectional algorithm, which works as follows. The pixels are grouped according to their intensity values; if two pixel groups are of similar size, they are considered to be the two headlights of the same vehicle, and the headlights are thereby detected. The centroid of each group is then calculated, and if two centroids lie on the same line the two headlights are paired and the vehicle count is incremented. The vehicle tracking and identification process then proceeds through the following phases. First, the vehicle-component tracking process associates the motion of vehicle components in succeeding frames by analysing their spatial and temporal features. Next, a motion-based grouping process is applied to the tracked components to construct whole moving vehicles, which are then tracked in the vehicle-tracking phase. Finally, the vehicle-recognition phase identifies and classifies the types of the tracked vehicles.
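The grouping-and-pairing logic above can be sketched end to end: label the connected components of the binary headlight mask, compute blob centroids and areas, and count one vehicle per pair of blobs with near-equal row position and similar size. The tolerances and the toy mask are illustrative assumptions, not values from the paper:

```python
import numpy as np
from collections import deque

def blobs(mask):
    """4-connected component labelling by BFS; returns (cy, cx, area) per blob."""
    seen, out = np.zeros(mask.shape, bool), []
    for y, x in zip(*np.nonzero(mask)):
        if seen[y, x]:
            continue
        q, pix = deque([(y, x)]), []
        seen[y, x] = True
        while q:
            cy, cx = q.popleft()
            pix.append((cy, cx))
            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
        ys, xs = zip(*pix)
        out.append((sum(ys) / len(pix), sum(xs) / len(pix), len(pix)))
    return out

def count_vehicles(mask, row_tol=2, size_tol=0.5):
    """Pair blobs whose centroids lie on (almost) the same row and whose
    areas are similar; each pair counts as one vehicle."""
    bs, used, count = blobs(mask), set(), 0
    for i in range(len(bs)):
        for j in range(i + 1, len(bs)):
            if i in used or j in used:
                continue
            (yi, _, ai), (yj, _, aj) = bs[i], bs[j]
            if abs(yi - yj) <= row_tol and abs(ai - aj) / max(ai, aj) <= size_tol:
                used |= {i, j}
                count += 1
    return count

m = np.zeros((20, 30), bool)
m[5:8, 3:6] = m[5:8, 12:15] = True   # two same-row headlight blobs
print(count_vehicles(m))              # → 1
```

An unpaired blob (for example a lone motorbike lamp or a leftover reflection) is simply never matched and so does not inflate the count, which is the failure mode of the vanishing-point pairing that the proposed method avoids.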
VI EXPERIMENTAL RESULTS
(e) (f)
Fig(e) represents the edge-detection output and fig(f) the bright-light extraction.
(g) (h)
Fig(g) represents the output of the morphological operation and fig(h) the pairing of the vehicle headlights.

A. RESULTS ON EDGE DETECTION
Fig(e) above shows the output of edge detection. The Canny edge detector is used to detect edges in congested scenes because it offers good detection, good localization, and a minimal response. It uses a multi-stage algorithm to detect a wide range of edges in an image: noise reduction, finding the intensity gradient of the image, non-maximum suppression, and tracing edges through the image with hysteresis thresholding.

B. RESULTS ON HEADLIGHT EXTRACTION
In nighttime video scenes, headlights are a strong and consistent feature indicating the presence of vehicles. By applying the thresholding method, the headlights of moving vehicles can be efficiently segmented in nighttime traffic images, as shown in fig
(f). Here the threshold value is set to 240; according to that pixel value, the headlight pixels are segmented in the traffic image sequences.

C. RESULTS ON THRESHOLDING
Fig(f) above shows the bright-light extraction. For counting the number of vehicles in night scenes, the headlights are a strong and consistent feature. In this study the headlight pixels are segmented in grayscale images by the thresholding method, with the threshold value set to 240.

D. RESULTS ON MORPHOLOGICAL OPERATION
After the headlight regions are segmented, some reflections remain, mainly due to common light sources such as rear lights, street lamps, and reflections of the headlights themselves. These reflections are the main source of interference with headlight detection. Headlights can be differentiated from reflections by their shapes, because the shape of a headlight is stable compared with that of its reflection; the reflections on the road surface are therefore suppressed by the morphological operation, as shown in the figure.

Fig(j) graph has been plotted between number of frames and count.
Fig(i) tabulation represents the number of vehicles detected per frame (proposed work):

S.NO | NUMBER OF VEHICLES | DETECTION TIME | ACCURACY
  1  |        22          |     0.5860     |   95.5%
  2  |        15          |     0.4054     |   96.3%
  3  |        17          |     0.3665     |   97.5%
  4  |        16          |     0.3554     |   98.2%

VII CONCLUSION
The proposed method provides vehicle detection and tracking at night for identifying and classifying moving vehicles in congested areas. Initially the edges of the objects are detected with a Canny edge detector, and the bright-object extraction in the segmentation process is done by thresholding. The headlights of the vehicles are then tracked using the connected-component labelling technique and paired by the bidirectional reasoning algorithm, from which the number of vehicles is obtained accurately.
VIII FUTURE ENHANCEMENT
In further studies, the vehicle-type classification function can be improved and extended by integrating sophisticated machine-learning techniques, such as support vector machine classifiers on multiple features, including vehicle lights and vehicle bodies, to enhance the classification of more detailed vehicle types, such as sedans, buses, trucks, lorries, and light and heavy motorbikes; this would also make it possible to separate four-wheelers from two-wheelers.

REFERENCES
[1] S. Tsugawa, “Vision-based vehicles in Japan: Machine vision systems and driving control systems,” IEEE Trans. Ind. Electron., vol. 41, no. 4, pp. 398–405, Aug. 1994.
[2] Z. Sun, G. Bebis, and R. Miller, “On-road vehicle detection: A review,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 28, no. 5, pp. 694–711, May 2006.
[3] A. Broggi, M. Bertozzi, A. Fascioli, and G. Conte, Automatic Vehicle Guidance: The Experience of the ARGO Autonomous Vehicle. Singapore: World Scientific, 1999.
[4] H. Mori, M. Charkari, and T. Matsushita, “On-line vehicle and pedestrian detections based on sign pattern,” IEEE Trans. Ind. Electron., vol. 41, no. 4, pp. 384–391, Aug. 1994.
[5] A. Broggi, M. Cellario, P. Lombardi, and M. Porta, “An evolutionary approach to visual sensing for vehicle navigation,” IEEE Trans. Ind. Electron., vol. 50, no. 1, pp. 18–29, Feb. 2003.
[6] T. Bücher, C. Curio, J. Edelbrunner, C. Igel, D. Kastrup, I. Leefken, G. Lorenz, A. Steinhage, and W. Seelen, “Image processing and behavior planning for intelligent vehicles,” IEEE Trans. Ind. Electron., vol. 50, no. 1, pp. 62–75, Feb. 2003.
[7] S. Gupte, O. Masoud, R. F. K. Martin, and N. P. Papanikolopoulos, “Detection and classification of vehicles,” IEEE Trans. Intell. Transp. Syst., vol. 3, no. 1, pp. 37–47, Mar. 2002.
[8] B.-F. Wu, S.-P. Lin, and Y.-H.
Chen, “A real-time multiple-vehicle detection and tracking system with prior occlusion detection and resolution,” in Proc. IEEE Int. Symp. Signal Process. Inf. Technol., Dec. 2005, pp. 311–316.
[9] J. Zhou, D. Gao, and D. Zhang, “Moving vehicle detection for automatic traffic monitoring,” IEEE Trans. Veh. Technol., vol. 56, no. 1, pp. 51–59, Jan. 2007.
[10] W. F. Gardner and D. T. Lawton, “Interactive model-based vehicle tracking,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 18, no. 11, pp. 1115–1121, Nov. 1996.
[11] G. D. Sullivan, K. D. Baker, A. D. Worrall, C. I. Attwood, and P. M. Remagnino, “Model-based vehicle detection and classification using orthographic approximations,” Image Vis. Comput., vol. 15, no. 8, pp. 649–654, Aug. 1997.
[12] D. Koller, J. Weber, T. Huang, J. Malik, G. Ogasawara, B. Rao, and S. Russell, “Towards robust automatic traffic scene analysis in real-time,” in Proc. Int. Conf. Pattern Recog., 1994, vol. 1, pp. 126–131.
[13] N. Peterfreund, “Robust tracking of position and velocity with Kalman snakes,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 21, no. 6, pp. 564–569, Jun. 1999.
[14] S.-T. Tseng and K.-T. Song, “Real-time image tracking for traffic monitoring,” in Proc. IEEE 5th Int. Conf. Intell. Transp. Syst., 2002, pp. 10–14.
[15] L.-W. Tsai, J.-W. Hsieh, and K.-C. Fan, “Vehicle detection using normalized color and edge map,” IEEE Trans. Image Process., vol. 16, no. 3, pp. 850–864, Mar. 2007.
[16] D. Beymer, P. McLauchlan, B. Coifman, and J. Malik, “A real-time computer vision system for measuring traffic parameters,” in Proc. IEEE Conf. Comput. Vis. Pattern Recog., Jun. 1997, pp. 495–510.