DETECTION OF ABANDONED OBJECTS
IN CROWDED ENVIRONMENTS
INTRODUCTION
• Visual surveillance systems today consist of a large number of cameras, usually monitored
by a relatively small team of human operators.
• Recent studies have shown that the average human can focus on tracking the movements
of up to four dynamic targets simultaneously, and can efficiently detect changes to the
attended targets but not the neighboring distractors.
• When targets and distractors are too close, it becomes difficult to individuate the targets
and maintain tracking efficiently.
• Further, according to the classical spotlight theory of visual attention, people can attend to
only one region of space (i.e. area in view) at a time, or at most, two.
• Simply stated, the human visual processing capability and attentiveness required for
effective monitoring of crowded scenes or multiple screens within a surveillance system
are limited.
PROPOSED ALGORITHM
The abandonment event is modeled as 4 sub-events. The algorithm itself proceeds in two stages:
1. Detection of unattended bag.
2. Reverse traversal through
previous frames to discover the
likely owner.
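• The deck's code (in the CODES section) covers only the first stage. As an illustration of the second, here is a minimal sketch of the reverse-traversal idea, assuming a buffer of recent frames and a hypothetical personNear predicate that tests whether a person blob lies near the bag:

% Sketch (our construction, not the deck's code): once a bag is declared
% unattended, walk backwards through buffered frames to find the last frame
% in which a person was close to the bag -- that person is the likely owner.
function ownerFrame = findLikelyOwner(frameBuffer, bagBBox, personNear)
    % frameBuffer : cell array of recent frames, oldest first
    % bagBBox     : [x y width height] of the unattended bag
    % personNear  : function handle (hypothetical), true if a person blob
    %               lies near bagBBox in the given frame
    ownerFrame = -1;                       % -1 means no owner was found
    for k = numel(frameBuffer):-1:1        % reverse traversal
        if personNear(frameBuffer{k}, bagBBox)
            ownerFrame = k;                % last frame with a nearby person
            return;
        end
    end
end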
COMPUTATIONAL MODULE
I. Detection of Unattended Baggage
• The goal of the first module of the algorithm is the detection of any stationary baggage. Until such an
event occurs, it is unnecessary to track and monitor all ongoing activities in the scene. Deferring
full-scene tracking in this way not only cuts computational costs but also avoids the ambiguities
that tracking inaccuracies introduce amid heavy movement and occlusion.
• The representation of bags is established using typical shape and size characteristics. The classifier is
trained off-line using the following features (a sketch of how they can be computed follows this list):
• Compactness – the ratio of area to squared perimeter (multiplied by 4π for normalization)
• Solidity ratio – the extent to which the blob area covers the convex hull area
• Eccentricity – the ratio of the major axis to the minor axis of an ellipse that envelops the blob
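• As an illustration (our sketch, not the deck's training code), these features can be computed for each blob in a binary mask BW with regionprops; note that the regionprops 'Eccentricity' property is defined differently, so the axis ratio is computed by hand here:

% Sketch: computing the classifier features per blob (assumes the Image
% Processing Toolbox; BW is a binary foreground mask).
stats = regionprops(BW, 'Area', 'Perimeter', 'ConvexArea', ...
                        'MajorAxisLength', 'MinorAxisLength');
for i = 1:numel(stats)
    s = stats(i);
    compactness = 4*pi*s.Area / s.Perimeter^2;            % 1 for a perfect disc
    solidity    = s.Area / s.ConvexArea;                  % blob area over convex hull area
    axisRatio   = s.MajorAxisLength / s.MinorAxisLength;  % the deck's "eccentricity"
    fprintf('blob %d: compactness=%.2f, solidity=%.2f, axis ratio=%.2f\n', ...
            i, compactness, solidity, axisRatio);
end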
• To ensure that the bag remains stationary while left alone, as well as to reinforce the decision of the
classifier, each suspect blob is tracked over a number of consecutive frames (usually around 10) to
check for consistency of detection and position, before it is declared unattended and the search for
its potential owner(s) begins.
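• A minimal sketch of that consistency check (our construction): accept a suspect blob as stationary only if its centroid barely moves across N consecutive frames:

% Sketch (our construction): centroids is an N-by-2 history of one blob's
% centroid over consecutive frames; tol is a small position tolerance in pixels.
N = 10; tol = 3;
isStationary = true;
for k = 2:N
    if norm(centroids(k,:) - centroids(1,:)) > tol
        isStationary = false;   % the blob moved; it is not unattended yet
        break;
    end
end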
CURRENT APPROACH: BLOB
ANALYSIS SYSTEM
• Extract a region of interest (ROI), thus eliminating video areas that are unlikely to
contain abandoned objects.
• Perform video segmentation using background subtraction.
• Track objects based on their area and centroid statistics.
• Visualize the results.
EXTRACT A REGION OF INTEREST
(ROI)
• The ROI is specified as roi = [x y width height], where (x, y) is the top-left corner of the
portion of the image to be processed and width and height give its extent. Only this portion
of each frame is analyzed (see the snippet below).
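• Concretely (an illustrative snippet, not from the deck), with roi = [100 80 360 240] the ROI can be cropped from a frame Im as follows; note that MATLAB indexes rows by y and columns by x, and that the script in the CODES section instead crops from the ROI corner to the image edge:

roi = [100 80 360 240];   % [x y width height]: top-left corner (100, 80), 360 x 240 pixels
OutIm = Im(roi(2):roi(2)+roi(4)-1, roi(1):roi(1)+roi(3)-1, :);   % exact width-by-height crop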
PERFORM VIDEO SEGMENTATION USING
BACKGROUND SUBTRACTION
• Create a Color Space Converter System object to convert the RGB image to Y'CbCr
format.
• Create an Autothresholder System object with a threshold scale factor to segment the difference image.
• Create a Morphological Close System object to fill in small gaps in the detected
objects.
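• One non-obvious detail of this step: the script packs the Cb and Cr planes into a single complex image so that abs() of a difference measures Euclidean distance in chroma space. A minimal single-frame sketch (assuming the Computer Vision Toolbox, a current RGB frame Im, and a background frame Bkg, both of class single):

% Sketch of the segmentation core for one frame.
hConv = vision.ColorSpaceConverter('Conversion', 'RGB to YCbCr');
hAuto = vision.Autothresholder('ThresholdScaleFactor', 1.3);
YCbCr    = step(hConv, Im);    % current frame in Y'CbCr
BkgYCbCr = step(hConv, Bkg);   % background frame in Y'CbCr
% Pack Cb and Cr into one complex plane: abs(a - b) is then the Euclidean
% distance between the two chroma vectors.
CbCr    = complex(YCbCr(:,:,2),    YCbCr(:,:,3));
BkgCbCr = complex(BkgYCbCr(:,:,2), BkgYCbCr(:,:,3));
SegY    = step(hAuto, abs(YCbCr(:,:,1) - BkgYCbCr(:,:,1)));  % luma change, auto threshold
SegCbCr = abs(CbCr - BkgCbCr) > 0.05;                        % chroma change, fixed threshold
Segmented = SegY | SegCbCr;    % foreground mask, before morphological closing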
TRACK OBJECTS BASED ON THEIR AREA
AND CENTROID STATISTICS.
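• The helper videoobjtracker is called in the CODES section but its source is not included in the deck. A minimal sketch (our construction, not the actual helper) of the match test its parameters imply: a detection continues an existing track when both its area and its centroid change by less than the configured percentages.

% Sketch (our construction) of the per-frame match test implied by
% areaChangeFraction and centroidChangeFraction (both in percent).
function ok = matchesTrack(dArea, dCentroid, tArea, tCentroid, ...
                           areaChangeFraction, centroidChangeFraction)
    areaChangePct     = 100 * abs(double(dArea) - double(tArea)) / double(tArea);
    centroidChangePct = 100 * norm(dCentroid - tCentroid) / norm(tCentroid);
    ok = areaChangePct < areaChangeFraction && ...
         centroidChangePct < centroidChangeFraction;
end

• Per the parameter comments in the CODES section, a track that goes unmatched for more than maxConsecutiveMiss frames is dropped, and a track that persists (detected in at least minPersistenceRatio of frames) and stays stationary for alarmCount frames is reported as abandoned.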
VISUALIZE THE RESULTS
CODES

roi = [100 80 360 240];  % region of interest [x y width height]: the portion
                         % of each frame on which detection is performed

% Maximum number of objects to track
maxNumObj = 200;
% Number of frames that an object must remain stationary before an alarm is raised
alarmCount = 45;
% Maximum number of frames that an abandoned object can be hidden before it
% is no longer tracked
maxConsecutiveMiss = 4;
areaChangeFraction = 20;      % maximum allowable change in object area, in percent
centroidChangeFraction = 30;  % maximum allowable change in object centroid, in percent
% Minimum ratio between the number of frames in which an object is detected
% and the total number of frames, for that object to be tracked
minPersistenceRatio = 0.3;
% Offsets for drawing bounding boxes in the original input video
% (int32 converts to 32-bit signed integers; repmat tiles the row vector)
PtsOffset = int32(repmat([roi(1), roi(2), 0, 0], [maxNumObj 1]));

%% Create a VideoFileReader System object to read video from a file
hVideoSrc = vision.VideoFileReader;
hVideoSrc.Filename = 'Abandoned_Bag1.mp4';
hVideoSrc.VideoOutputDataType = 'single';

%% Create a ColorSpaceConverter System object to convert the RGB image to
% Y'CbCr format (Y'CbCr is a coordinate transformation of the associated
% RGB color space)
hColorConv = vision.ColorSpaceConverter('Conversion', 'RGB to YCbCr');

%% Create an Autothresholder System object to segment the luma difference image
hAutothreshold = vision.Autothresholder('ThresholdScaleFactor', 1.3);

%% Create a MorphologicalClose System object to fill in small gaps in the
% detected objects
hClosing = vision.MorphologicalClose('Neighborhood', strel('square',5));

%% Create a BlobAnalysis System object to find the area, centroid, and
% bounding box of the objects in the video
hBlob = vision.BlobAnalysis('MaximumCount', maxNumObj, 'ExcludeBorderBlobs', true);
hBlob.MinimumBlobArea = 100;
hBlob.MaximumBlobArea = 2500;

%% Create System objects to display results
pos = [10 300 roi(3)+25 roi(4)+25];
hAbandonedObjects = vision.VideoPlayer('Name', 'Abandoned Objects', 'Position', pos);
pos(1) = 46 + roi(3);  % move the next viewer to the right
hAllObjects = vision.VideoPlayer('Name', 'All Objects', 'Position', pos);
pos = [80+2*roi(3) 300 roi(3)-roi(1)+25 roi(4)-roi(2)+25];
hThresholdDisplay = vision.VideoPlayer('Name', 'Threshold', 'Position', pos);

%% Video Processing Loop
% Perform abandoned object detection on the input video using the System
% objects instantiated above.
firsttime = true;
while ~isDone(hVideoSrc)
    Im = step(hVideoSrc);

    % Select the region of interest from the original video
    OutIm = Im(roi(2):end, roi(1):end, :);
    YCbCr = step(hColorConv, OutIm);
    CbCr  = complex(YCbCr(:,:,2), YCbCr(:,:,3));

    % Store the first video frame as the background
    if firsttime
        firsttime = false;
        BkgY    = YCbCr(:,:,1);
        BkgCbCr = CbCr;
    end
    SegY    = step(hAutothreshold, abs(YCbCr(:,:,1) - BkgY));
    SegCbCr = abs(CbCr - BkgCbCr) > 0.05;

    % Fill in small gaps in the detected objects
    Segmented = step(hClosing, SegY | SegCbCr);

    % Perform blob analysis
    [Area, Centroid, BBox] = step(hBlob, Segmented);

    % Call the helper function (it must be on the MATLAB path; see the sketch
    % above) that tracks the identified objects and returns the bounding boxes
    % and the number of abandoned objects
    [OutCount, OutBBox] = videoobjtracker(Area, Centroid, BBox, maxNumObj, ...
        areaChangeFraction, centroidChangeFraction, maxConsecutiveMiss, ...
        minPersistenceRatio, alarmCount);

    % Display the abandoned object detection results, with the number of
    % abandoned objects inserted in the frame
    Imr = insertShape(Im, 'FilledRectangle', OutBBox + PtsOffset, ...
        'Color', 'red', 'Opacity', 0.5);
    Imr = insertText(Imr, [1 1], OutCount);
    step(hAbandonedObjects, Imr);

    % Display all the detected objects, again annotated with the count
    BlobCount  = size(BBox, 1);
    BBoxOffset = BBox + int32(repmat([roi(1) roi(2) 0 0], [BlobCount 1]));
    Imr = insertShape(Im, 'Rectangle', BBoxOffset, 'Color', 'green');
    Imr = insertText(Imr, [1 1], OutCount);
    Imr = insertShape(Imr, 'Rectangle', roi);
    step(hAllObjects, Imr);

    % Display the segmented video
    SegBBox = PtsOffset;
    SegBBox(1:BlobCount, :) = BBox;
    SegIm = insertShape(double(repmat(Segmented, [1 1 3])), 'Rectangle', ...
        SegBBox, 'Color', 'green');
    step(hThresholdDisplay, SegIm);
end
release(hVideoSrc);
h = msgbox('The object has been detected!');
OUTPUT
REFERENCES
• M. Bhargava, C.-C. Chen, M. S. Ryoo, and J. K. Aggarwal, "Detection of Abandoned Objects
in Crowded Environments," University of Texas at Austin.
• C. Sears and Z. Pylyshyn, "Multiple Object Tracking."
• J. Martínez-del-Rincón, J. Elías Herrero, Jorge Gómez, and Carlos Orrite-Uruñuela,
"Automatic Left Luggage Detection and Tracking Using Multi-Camera."
• Anoop Mathew.
THANK YOU
MADE BY: ARUSHI CHAUDHRY AND SAUMYA TIWARI
