Analysis of Error in Image Logging between
Subsequent Frames of a Streaming Video
Mr. THANMAY J.S
Assistant Professor, Mechanical Department, G.S.S.S.I.E.T.W Mysore.
Email: thanmay26@rediffmail.com
Abstract— In image analysis of a streaming video, for certain
applications it may not be necessary to log every frame provided
by an image acquisition device. In fact, it may be more practical
and resource-efficient to log frames at certain intervals. To log
frames at a given interval, the acquisition device must be configured
with the Frame Grab Interval, the Trigger property, the Frame Rate
property, the number of frames to be logged, and so on. Configuring
the Frame Grab Interval property to an integer value specifies which
frames are logged. In general, the error introduced by delays in
logging data is a critical issue, and it is examined in this report.
A set of experimental results obtained by varying the frame rate and
the number of frames to be logged, together with the resulting
percentage errors, is analyzed in this report.
Consider a captured image used for visual analysis of items
traveling on a fast-moving conveyor belt. The image capture
system must know when each item is in the right place, capture
the image, and transmit the result for processing before the next
sample arrives. Selecting trigger sensors is not the problem; the
next consideration is timing, which can be sporadic in nature
and depends on the logging of data and the error in acquisition. This
report explains the importance of adjusting device parameters
such as the frame rate and the grab interval at which frames are
logged, and how the number of frames logged per second relates to
the error in logging frames.
Keywords: Image Acquisition, Video Device setting, Accessing
Device Properties, Frame Rate, Grab Interval, Logging Data,
Time difference between frames etc.

I. INTRODUCTION TO IMAGE ACQUISITION

This document provides information on acquiring image data after
you create the video input object and configure its properties. The
Image Acquisition system enables you to connect to an image
acquisition device from within a software interface session and,
being based on object technology, provides functions for creating
objects that represent the connection to the device. This kind of
image acquisition system is well suited to artificial intelligence
and machine vision systems in industry. An image acquisition
application involves these major steps:
Step 1) Starting the video input object -- You start an object by
calling the start function. Starting an object prepares it for data
acquisition; for example, starting an object locks the values of
certain object properties (they become read-only). Starting an
object does not, however, initiate the acquisition of image frames.
The initiation of data logging depends on the execution of a trigger.
The example calls the start function to start the video input object.
Objects stop when they have acquired the requested number of frames.
Step 2) Triggering the acquisition -- To acquire data, a video
input object must execute a trigger. Triggers can occur in
several ways, depending on how the Trigger Type property is
configured. For example, if you specify an immediate trigger,
the object executes a trigger automatically, immediately after it
starts. If you specify a manual trigger, the object waits for a
call to the trigger function before it initiates data acquisition.
Step 3) Bringing data into the workspace -- The image analysis
software stores acquired data in a memory buffer, a disk file,
or both, depending on the value of the video input object's
Logging Mode property. To work with this data, you must
bring it into the workspace. Once the data is in the workspace,
you can manipulate it as you would any other data. A minimal
sketch of this workflow is given below.
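The paper gives no source code for these steps. The following is only a conceptual Python sketch of the same start, trigger, and bring-into-workspace sequence, assuming OpenCV's cv2.VideoCapture as a stand-in for the video input object described above; the constant FRAMES_PER_TRIGGER and the wait_for_trigger helper are illustrative names, not part of the original system.

```python
# Conceptual Python/OpenCV analogue of the start -> trigger -> workspace workflow.
import time
import cv2

FRAMES_PER_TRIGGER = 30          # assumed number of frames to log per trigger

def wait_for_trigger():
    """Placeholder for a manual trigger (sensor, key press, etc.)."""
    input("Press Enter to trigger acquisition...")

cap = cv2.VideoCapture(0)        # "start": connect to the device
if not cap.isOpened():
    raise RuntimeError("Could not open image acquisition device")

wait_for_trigger()               # "trigger": data logging begins only now

frames, stamps = [], []
for _ in range(FRAMES_PER_TRIGGER):
    ok, frame = cap.read()       # log one frame from the stream
    if not ok:
        break
    frames.append(frame)         # bring the data into the workspace
    stamps.append(time.time())   # timestamps are used later for error analysis

cap.release()                    # the object "stops" after the requested frames
print(f"Logged {len(frames)} frames")
```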

Mr. THANMAY J.S, Assistant Professor in Mechanical Department, G.S.S.S
Institute of Engineering and Technology for Women, K.R.S. Road,
Metagalli, Mysore-570016, Karnataka, India.
(E-mail of corresponding author: thanmay26@rediffmail.com).

II. BASIC IMAGE ACQUISITION PROCEDURE
This section illustrates the basic steps required to create an
image acquisition application by implementing a simple
motion detection application. The application detects
movement in a scene by performing a pixel-to-pixel
comparison in pairs of incoming image frames. If nothing
moves in the scene, pixel values remain the same in each
frame. When something moves in the image, the application
displays the pixels that have changed values.
To use the Image Acquisition system to acquire image data,
you must perform the following basic steps:
Step 1: Install and configure your image acquisition device
Step 2: Configure image acquisition properties
Step 3: Create a video input object
Step 4: Preview the video stream
Step 5: Acquire image data
Step 6: Perform image analysis and display the results
Step 7: Clean up the memory
Certain steps are optional, and this kind of streaming image
acquisition is well suited to machine vision systems. A minimal
frame-differencing sketch of the motion detection idea described
above follows.
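The motion detection application itself is not listed in the paper; the following is one plausible Python/OpenCV rendering of the pixel-to-pixel comparison described above, where the device index and the intensity threshold DIFF_THRESHOLD are assumptions.

```python
# Minimal frame-differencing motion detector, assuming a webcam at index 0.
import cv2

DIFF_THRESHOLD = 25              # assumed per-pixel intensity change that counts as motion

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
if not ok:
    raise RuntimeError("Could not read from the acquisition device")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)               # pixel-to-pixel comparison
    _, changed = cv2.threshold(diff, DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)
    cv2.imshow("changed pixels", changed)             # show only the pixels that moved
    prev_gray = gray
    if cv2.waitKey(1) & 0xFF == ord('q'):             # press 'q' to stop
        break

cap.release()
cv2.destroyAllWindows()
```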
III. IMAGE ACQUISITION PROPERTIES

1) The frame rate describes how fast an image acquisition
device provides data, typically measured in frames per second
(Figure No 1). Devices that support industry-standard video
formats must provide frames at the rate specified by the
standard. For RS170 and NTSC, the standard dictates a frame
rate of 30 frames per second (30 Hz); the CCIR and PAL
standards define a frame rate of 25 Hz. Non-standard devices
can be configured to operate at higher rates, and generic Windows
image acquisition devices, such as webcams, might support
many different frame rates. Depending on the device being
used, the frame rate might be configurable through a device-specific
property of the image acquisition object. The rate at
which the image acquisition software can process images
depends on the processor speed, the complexity of the
processing algorithm, and the frame rate. Given a fast
processor, a simple algorithm, and a frame rate tuned to the
acquisition setup, the software can process data as it comes in;
a simple timing-budget check is sketched below.
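As a quick illustration of that last point, the check below compares an assumed per-frame processing time with the nominal frame period; both numbers are made up purely for illustration.

```python
# Can the processing keep up with the incoming frame rate?
frame_rate_hz = 30.0                       # e.g. an NTSC-style 30 fps device
frame_period_s = 1.0 / frame_rate_hz       # about 0.033 s available per frame

processing_time_s = 0.025                  # assumed time the algorithm needs per frame

if processing_time_s <= frame_period_s:
    print("Processing keeps up: data can be handled as it comes in")
else:
    backlog_per_s = frame_rate_hz * (processing_time_s - frame_period_s)
    print(f"Frames will queue up; about {backlog_per_s:.2f} s of backlog accrues each second")
```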
2) The Frame Grab Interval property specifies how often
the video input object acquires a frame from the video stream
(Figure No 1). By default, objects acquire every frame in the
video stream, but you can use this property to specify other
acquisition intervals. For example, when you specify a Frame
Grab Interval value of 3, the object acquires every third frame
from the video stream, as illustrated in the figure. The object
acquires the first frame in the video stream before applying the
Frame Grab Interval; a sketch of this behaviour follows.
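OpenCV's VideoCapture has no Frame Grab Interval property, so the sketch below merely emulates the behaviour in Python by keeping the first frame and then every third frame, matching the example value of 3 given above.

```python
# Emulating a Frame Grab Interval of 3: keep frame 0, then frames 3, 6, 9, ...
import cv2

FRAME_GRAB_INTERVAL = 3

cap = cv2.VideoCapture(0)
logged = []
index = 0
while len(logged) < 20:                    # stop after 20 logged frames (arbitrary limit)
    ok, frame = cap.read()
    if not ok:
        break
    if index % FRAME_GRAB_INTERVAL == 0:   # the first frame is logged before the interval applies
        logged.append(frame)
    index += 1
cap.release()
print(f"Logged {len(logged)} of {index} streamed frames")
```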
3) There are two ways to use the Trigger properties: one is
automatic and the other is manual (Figure No 1). To use an
Automatic immediate trigger, simply create a video input
object. Immediate triggering is the default trigger type for all
video input objects. With an immediate trigger, the object
executes the trigger immediately after you start the object
running with the start command. To use a manual trigger,
create a video input object and set the value of the Trigger
Type property to 'manual'. A video input object executes a
manual trigger after you issue the trigger function.
4) The Frames per Trigger property is used to move multiple frames
of data from the memory buffer into the workspace (Figure No 1).
By default, retrieving data returns the number of frames
specified in the Frames per Trigger property, but you can
specify any number. In the Image Acquisition system, you
specify the amount of data you intend to acquire, that is, the
desired size of your acquisition, as the value of the video input
object's Frames per Trigger property; a small buffering sketch
follows.
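As with the other properties, the following only emulates Frames per Trigger in Python: on each assumed trigger it moves a fixed block of frames out of the stream into a NumPy array, the rough equivalent of bringing them into the workspace.

```python
# Retrieve a fixed block of frames per trigger into a single array.
import numpy as np
import cv2

FRAMES_PER_TRIGGER = 10        # assumed value for illustration

def acquire_block(cap, n_frames):
    """Grab n_frames consecutive frames and stack them into one array."""
    block = []
    for _ in range(n_frames):
        ok, frame = cap.read()
        if not ok:
            raise RuntimeError("Stream ended before the block was filled")
        block.append(frame)
    return np.stack(block)     # shape: (n_frames, height, width, channels)

cap = cv2.VideoCapture(0)
data = acquire_block(cap, FRAMES_PER_TRIGGER)   # one "trigger" worth of data
cap.release()
print("Workspace array shape:", data.shape)
```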

Figure No 2. Machine vision application using a video stream.

The question that arises is, why understand all of this? The
reason is timing. Observe the figure above (Figure No 2),
where moving items are constantly monitored by machine
vision. If the logging time and the process do not match each other,
then bottle caps will be placed on the labels and labels will be
stuck on the caps. Is this possible? Yes; the following simple
experiment illustrates why.
IV. UNDERSTANDING PROPERTIES OF IMAGE ACQUISITION

Experiment Number: 01
The following experiment is done by lighting a match stick in
front of a webcam running at 30 frames per second (Figure No 3),
with 400 frames logged. This gives an idea of how well the image
is acquired: 30 frames per second means one frame every 0.033
seconds, so counting frames should give the exact timing of the
match stick's burning. Let us see.

Figure No 3. Video stream of a match stick burning, captured at 30 fps;
number of frames logged is 400.

Observe the figure above (Figure No 3): at 30 fps each frame takes
0.033 seconds, and the match stick burns for 146 frames. This
suggests that 146/30 is the time required for the match stick to
burn, that is, 4.866 seconds?
Experiment Number: 02
The following experiment is done by lighting a match stick in
front of a webcam at 30 frames per second (Figure No 4), with 300
frames logged. Here 155 frames show burning, so 155/30 should be
the actual burning time of the match stick, which is 5.166 seconds?

Figure No 4. Video stream of a match stick burning, captured at 30 fps;
number of frames logged is 300.
Figure No 1. Image acquisition properties in a video stream.

Note: every match stick can burn for up to 25 seconds.
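To make the discrepancy explicit, the naive frame-count timing for both experiments can be computed directly (frame counts taken from Figures No 3 and No 4); the results fall far below the 20 to 25 seconds a match stick actually takes, which motivates the error analysis in the next section.

```python
# Naive burn-time estimate: counted burning frames divided by the nominal frame rate.
fps = 30
for name, burning_frames in [("Experiment 01", 146), ("Experiment 02", 155)]:
    naive_seconds = burning_frames / fps
    print(f"{name}: {burning_frames} frames / {fps} fps = {naive_seconds:.3f} s")
# Both come out near 5 s, although the match stick visibly burns for about 20 s or more,
# so the logged frames cannot really be arriving every 1/30 s.
```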
V. REASONS FOR IMAGE ACQUISITION PROPERTIES TO BE DISOBEYED

Whenever a queue of data frames is waiting to enter the
processor of a system, the frames pile up because the processing
speed is too low. This gives rise to an error known as the
"logging error between two frames".
A frame that waits to be logged delays the next frame until it
has been processed, and once it is processed the waiting frame may
no longer be counted, because the video stream is a dynamic
system in which data is logged on the basis of the number of frames
to be logged. For a consistent flow of data to be logged and
processed, the system follows a few rules, as given below.

Rule Number: 01
Logging Rate = 1 / (frames logged per second); e.g. at 30 fps the
logging rate is 1/30 = 0.033 s per frame.
Rule Number: 02
Average Difference between frames = the average of the time
differences between each consecutively logged pair, frame (n-1)
and frame (n).
Rule Number: 03
Percentage Error = |Logging Rate - Average Difference| x 100.
A short sketch showing how these three quantities can be computed
from logged frame timestamps is given below.
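The paper does not show how these quantities were computed. A plausible Python sketch, assuming only a list of wall-clock timestamps recorded as each frame was logged, is given here; the example timestamps are made up to roughly match row 1 of Table I.

```python
# Compute Logging Rate, Average Difference, and Percentage Error from frame timestamps.
def logging_error_stats(timestamps, fps):
    """timestamps: wall-clock times (s) at which each frame was logged; fps: nominal frame rate."""
    logging_rate = 1.0 / fps                                   # Rule 01: e.g. 1/30 = 0.033 s
    diffs = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    average_diff = sum(diffs) / len(diffs)                     # Rule 02: mean inter-frame gap
    percent_error = abs(average_diff - logging_rate) * 100.0   # Rule 03
    return logging_rate, average_diff, percent_error

# Illustrative use with made-up timestamps spaced about 0.147 s apart:
stamps = [0.147 * i for i in range(300)]
rate, avg_diff, err = logging_error_stats(stamps, fps=30)
print(f"logging rate {rate:.3f} s, average diff {avg_diff:.4f} s, percent error {err:.2f}")

# The corrected duration of an event spanning n logged frames is then n * average_diff,
# which is equivalent to the correction applied in the worked examples that follow.
```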
If these errors are not accounted for, the results are the ones you
have just seen. Now let us solve those two experiments.
Experiment Number: 01
Number of frames per second = 30 fps
Number of frames to be logged = 400
Time-based acquired (burning) frames = 146
Average difference between two frames = 0.1456 s
Error between two frames (average) = 11.2231
Approximate time for the match stick to burn = 20 s
Then the corrected time should be 146 x 0.112231 + 4.86 = 21.245726 s
(equivalently, 146 frames x 0.1456 s average difference, about 21.25 s).
Experiment Number: 02
Number of frames per second = 30 fps
Number of frames to be logged = 300
Time-based acquired (burning) frames = 155
Average difference between two frames = 0.1464 s
Error between two frames (average) = 11.3078
Approximate time for the match stick to burn = 23 s
Then the corrected time should be 155 x 0.113078 + 5.16 = 22.68709 s.
Now, seeing a result resolved to this many decimal places is
astounding. These results can bring the machine vision system back
to its proper timing, but a conclusion cannot be drawn so easily.
What if the number of frames to be logged ranges from 5 to 5000, or
the number of frames per second from 5 to 30? Such questions need
specific answers, so several experiments were carried out: the frame
rate per second and the number of frames to be logged were varied
systematically and the errors in logging were studied. This revealed
several important factors that cannot be explained in singular terms.

The following set of experiments is shown with the corresponding
results in tabular form. The variations applied are specifically
noted and the results tabulated.

VI. EXPERIMENT CASE STUDY FOR ERROR ANALYSIS BETWEEN LOGGING OF TWO FRAMES

The following three tables show the results obtained by varying the
frame rate per second, varying the number of frames to be logged,
or holding one of them constant.

TABLE I
EXPERIMENT NO: 01. To study variations in frame rate per second and
corresponding variations in number of frames to be logged.

Experiment No | Frame rate (fps) | No. of frames logged | Logging rate (s) | Average diff (s) | Percent error
1             | 30               | 300                  | 0.03             | 0.1474           | 11.4098
2             | 25               | 250                  | 0.04             | 0.1473           | 10.7297
3             | 20               | 200                  | 0.05             | 0.1458           | 9.5844
4             | 15               | 150                  | 0.06             | 0.1463           | 7.9682
5             | 10               | 100                  | 0.1              | 0.1422           | 4.2212
6             | 5                | 50                   | 0.2              | 0.1341           | 6.5878
TABLE II
EXPERIMENT NO: 02. To study constant frame rate per second and
variation in number of frames to be logged.

Experiment No | Frame rate (fps) | No. of frames logged | Logging rate (s) | Average diff (s) | Percent error
1             | 30               | 300                  | 0.033            | 0.1474           | 11.4098
2             | 30               | 250                  | 0.033            | 0.1472           | 11.3888
3             | 30               | 200                  | 0.033            | 0.1469           | 11.3541
4             | 30               | 150                  | 0.033            | 0.1454           | 11.2056
5             | 30               | 100                  | 0.033            | 0.1438           | 11.0495
6             | 30               | 50                   | 0.033            | 0.1348           | 10.1422
TABLE III
EXPERIMENT NO: 03. To study variations in frame rate per second and
constant number of frames to be logged.

Experiment No | Frame rate (fps) | No. of frames logged | Logging rate (s) | Average diff (s) | Percent error
1             | 30               | 100                  | 0.03             | 0.1428           | 10.9444
2             | 25               | 100                  | 0.04             | 0.1416           | 10.1646
3             | 20               | 100                  | 0.05             | 0.1412           | 9.1232
4             | 15               | 100                  | 0.06             | 0.1414           | 7.4768
5             | 10               | 100                  | 0.1              | 0.1357           | 3.5737
6             | 5                | 100                  | 0.2              | 0.1413           | 5.8747

Observe the error percentages shown in each table. Little variation
can be seen in them unless they are presented as numerical values;
remember that timing is the question to be answered. In each of the
three tables the logging rate itself is not our problem, because a
dense image simply carries a larger logging cost, since everything
to be logged is measured in pixels. The error that is registered is
the main picture. Averaging the differences between logged frames
resolves these problems, and the error percentage is calculated to
obtain correct timing. Machine vision, computer vision, or image
analysis of any video stream requires logging of data, and if that
data carries error, the whole analysis goes wrong.
VII. CONCLUSION

How can anyone believe a match stick burns for only 3 to 5 seconds?
This shows that even registering a video stream needs processing
speed, and this has to be accounted for in standards. The tabulated
results above are now presented in graphical form to draw
conclusions. The three bar charts, one per experiment, plot frames
per second, percentage error, and number of frames logged against
the experiment number (1 to 6), and they clarify the doubts
regarding image acquisition and the error incurred.

[Bar chart: EXPERIMENT NO: 01. Variations in frame rate per second
and corresponding variations in number of frames to be logged.]
EXPERIMENT NO: 01 CONCLUSION
A higher frame rate per second together with a larger number of
frames to be logged always shows higher error. Decreasing both
simultaneously reduces the error progressively.

[Bar chart: EXPERIMENT NO: 02. Constant frame rate per second and
variation in number of frames to be logged.]
EXPERIMENT NO: 02 CONCLUSION
A higher frame rate per second and a higher number of frames to be
logged always show higher error. Decreasing only the number of
frames to be logged while keeping the frame rate constant does not
reduce the error much.

[Bar chart: EXPERIMENT NO: 03. Variations in frame rate per second
and constant number of frames to be logged.]
EXPERIMENT NO: 03 CONCLUSION
Varying the frame rate per second while the number of frames to be
logged remains constant reduces the error dramatically.

The final conclusion for everyone to remember is that acquiring
image data from a video stream does not give accurate timing unless
the error between the logging of two subsequent frames is described.
Increasing or decreasing the frame rate per second or the number of
frames to be logged cannot be ignored.
Machine vision attached to robotics should be able to detect and
handle the multitude of tasks that arise in "hard engineering" in
industry, where low-tolerance features give rise to highly variable
images that must be processed along with the timing of work
handling. If the timing goes wrong, the learning ability of the
artificial intelligence will fall back on the trial-and-error method
that humans have mastered. The culture of developing machine vision
or artificial intelligence should not be blinded by error in image
acquisition and error in processing logged data. If logging data
depends on processor speed, then accurate analysis of an image from
a stream of data has to account for the average error in processing
the frames; doing so leads toward perfection.
VIII. REFERENCES

References were gathered through a primary literature survey via
internet sites such as http://citeseerx.ist.psu.edu
and by interacting with experts. References and readily
available publications are used only for background knowledge;
the experiments and proofs in this report are the author's own.
Samples of the formats for various types of references are given
below.
[1] A. Said and W. A. Pearlman, "A New Fast and Efficient Image Codec
Based on Set Partitioning in Hierarchical Trees", IEEE Trans. Circuits
and Syst. Video Technol., 6(3), 243-250 (1996).
[2] P. Corriveau and A. Webster, "VQEG Evaluation of Objective Methods of
Video Quality Assessment", SMPTE Journal, 108, 645-648, 1999.
[3] A. M. Rohaly, P. Corriveau, J. Libert, A. Webster, V. Baroncini, J.
Beerends, J. L. Blin, L. Contin, T. Hamada, D. Harrison, A. Hekstra, J.
Lubin, Y. Nishida, R. Nishihara, J. Pearson, A. F. Pessoa, N. Pickford,
A. Schertz, M. Visca, A. B. Watson, and S. Winkler, "Video Quality Experts
Group: Current results and future directions", Proc. SPIE Visual
Communications and Image Processing, vol. 4067, Perth, Australia,
June 21-23, 2000.
[4] C. B. Lambrecht, Ed., "Special Issue on Image and Video Quality
Metrics", Signal Processing, vol. 70, (1998).
[5] A. Said and W. A. Pearlman, "A new, fast, and efficient image codec
based on set partitioning in hierarchical trees", IEEE Trans. Circuits
Syst. Video Technol., vol. 6, pp. 243-250, June 1996.
[6] "Jpeg software codec", Portable Research Video Group, Stanford
University, 1997. Available via anonymous ftp from
havefun.stanford.edu:pub/jpeg/JPEGv1.1.tar.Z.
[7] J. Ashley, R. Barber, M. Flickner, J. Hafner, D. Lee, W. Niblack, and
D. Petkovic, "Automatic and semi-automatic methods for image
annotation and retrieval in QBIC", in W. Niblack and R. C. Jain, editors,
Storage and Retrieval for Image and Video Databases III, SPIE - The
International Society for Optical Engineering, 1995.
[8] A. Rav-Acha and S. Peleg, "Restoration of multiple images with motion
blur in different directions", IEEE Workshop on Applications of
Computer Vision, 2000.
[9] K. Cinkler and A. Mertins, "Coding of digital video with the
edge-sensitive discrete wavelet transform", in IEEE International
Conference on Image Processing, vol. 1, pp. 961-964, 1996.
[10] J. Bach, C. Fuler, A. Gupta, A. Hampapur, B. Horowitz, R. Humphrey,
R. Jain, and C. Shu, "The Virage Image Search Engine: An Open
Framework for Image Management", Proc. SPIE Conf. Storage and
Retrieval for Image and Video Databases IV, vol. 2670, pp. 76-87, 1996.
[11] R. J. Vidmar, "On the use of atmospheric plasmas as electromagnetic
reflectors", IEEE Trans. Plasma Sci. [Online], 21(3), pp. 876-880,
Aug. 1992. Available: http://www.halcyon.com/pub/journals/21ps03-vidmar
