IJRET: International Journal of Research in Engineering and Technology eISSN: 2319-1163 | pISSN: 2321-7308
__________________________________________________________
Volume: 03 Issue: 01 | Jan-2014, Available @ http://www.ijret.org 587

MOTION BASED ACTION RECOGNITION USING k-NEAREST NEIGHBOR

Shikha.A.Biswas1, Vasant.N.Bhonge2
1 M.E. Student, Branch Digital Electronics, Department of Electronics and Tele-communication, Shri Sant Gajanan Maharaja College of Engineering, Shegaon, Sant Gadge Baba Amravati University, Maharashtra State, India.
2 Associate Professor, Branch Digital Electronics, Department of Electronics and Tele-communication, Shri Sant Gajanan Maharaja College of Engineering, Shegaon, Sant Gadge Baba Amravati University, Maharashtra State, India.

Abstract
Analyzing the actions of a person from a video by using a computer is termed Action Recognition. This is an active research topic in the area of computer vision. There are many applications of this research, which include surveillance systems, patient monitoring systems, human performance analysis, content-based image/video retrieval/storage, and virtual reality. Although many efficient applications of action recognition are available, the most active application domain in the area of computer vision is to “look at people”. In this paper, the motion feature is extracted because motion features can portray the moving direction of the human body, and hence human actions can be effectively recognized by motion rather than by other features such as color, texture or shape. In the motion-based approach, methods that extract the motion of the human action, such as motion blob, optical flow, FIR-filtering or the watershed transform, are used for recognizing action. This paper presents a novel method of action recognition that analyzes human movements directly from video. The overall system consists of three major steps: blob extraction, feature extraction and action recognition. In the first step, the input video is preprocessed to extract the 2D blob. In the second step, the motion feature is extracted using optical flow, and at last the action is recognized using the k-Nearest Neighbor (kNN) classifier.

Keywords: Action Recognition, 2D blob, Optical Flow, kNN
-------------------------------***----------------------------------------------------------

1. INTRODUCTION
Recognizing or understanding the actions of a person from a video is the objective of action recognition. The main objective of our method is to improve the accuracy of recognition. Action recognition is classified into four types: Object-level, Tracking-level, Pose-level & Activity-level. Object-level recognizes the locations of objects, Tracking-level recognizes object trajectories, Pose-level recognizes the pose of a person & Activity-level recognizes the activity of a person. The major problems faced in action recognition are: view-point variation (movement of the camera), temporal variation (variation in duration and shift), and spatial variation (different people perform the same action in different ways). In our paper we present a method that eliminates all the above problems by using the concept of optical flow and kNN.

2. TOOL
The tool used for recognition is MATLAB, version 7.10.0 (R2010a).

3. DATASETS
The datasets used for Action Recognition are KTH and Weizmann. [9], [12]

Weizmann
It contains 10 types of actions performed by 9 subjects. Thus it contains a total of 90 AVI videos, taken with a static camera and static background and with a frame rate of 25 fps. Actions in this dataset include: bend, jack, run, side, skip, wave1, wave2, jump, P-jump and walk.

KTH
It contains 6 types of actions performed by 25 subjects in 4 homogeneous backgrounds. Thus it contains a total of 600 AVI videos, taken with a static camera over homogeneous backgrounds & with a frame rate of 25 fps. Actions in this dataset include: walking, jogging, running, boxing, hand waving, and hand clapping.

4. METHOD
The proposed method contains three main stages of recognition:

Fig-1: Stages of Action Recognition (Blob Extraction, Feature Extraction, Action Recognition)
4.1 Blob Extraction
The most commonly used low-level feature for identifying human action is the 2D blob. Hence the first stage is called the blob extraction, segmentation or pre-processing stage.
In this stage, the color video is first converted from RGB to gray and then finally to binary. To remove the salt and pepper noise, the gray-scale video is first median filtered and then converted into binary using autothresholding. Thus this stage divides the input video into two classes, i.e. foreground (activity of the human), called the 2D blob, and background (empty frame). Then, for enhancement, dilation is done. In the dilation process, the binary video is dilated by a structuring element (called a mask or window) of size 3 x 3, 5 x 5 or 7 x 7. The structuring element of the proposed method is shown below:
0 0 1 0 0
0 0 1 0 0
1 1 1 1 1
0 0 1 0 0
0 0 1 0 0
After dilation, the enhanced 2D blob is obtained. Enhancement is done for fast and easy extraction of the ‘motion’ feature. [2], [5], [11]
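The pre-processing chain of this stage (RGB to gray, median filtering, autothresholding) can be sketched as follows. The paper implements this stage in MATLAB; this is a simplified Python sketch, and the mean-based threshold is an illustrative assumption, since the paper does not name the exact autothresholding rule it uses:

```python
# Simplified sketch of the pre-processing steps of the blob-extraction stage.
# The mean-based auto-threshold is an illustrative choice, not the paper's
# necessarily exact rule.

def rgb_to_gray(pixel):
    """Luminance-weighted RGB-to-gray conversion of one pixel."""
    r, g, b = pixel
    return 0.2989 * r + 0.5870 * g + 0.1140 * b

def median3(row):
    """3-tap median filter over a 1-D row (suppresses salt-and-pepper noise)."""
    out = row[:]
    for i in range(1, len(row) - 1):
        out[i] = sorted(row[i - 1:i + 2])[1]
    return out

def autothreshold(values):
    """Binarize around the mean intensity: foreground (2D blob) vs background."""
    t = sum(values) / len(values)
    return [1 if v > t else 0 for v in values]
```

Applied in sequence, an isolated noise spike is removed by the median filter before binarization, so it never becomes a spurious foreground pixel.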
The blob extraction is shown below:
Fig-2: Blob extraction (input frame, dilated frame)
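The dilation step with the plus-shaped structuring element above can be sketched as follows. This is a pure-Python stand-in for the MATLAB dilation the paper uses: an output pixel is set wherever the structuring element, centred on any set input pixel, covers that position.

```python
# Binary dilation of the 2D blob with the paper's 5 x 5 plus-shaped
# structuring element.

# Offsets of the 1-entries in the structuring element: the centre row
# and the centre column of the 5 x 5 window.
SE = [(dr, dc)
      for dr in range(-2, 3)
      for dc in range(-2, 3)
      if dr == 0 or dc == 0]

def dilate(blob):
    """Dilate a binary image given as a list of lists of 0/1."""
    rows, cols = len(blob), len(blob[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if blob[r][c]:
                for dr, dc in SE:
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        out[rr][cc] = 1
    return out
```

Applied to a single foreground pixel, this turns it into the plus shape itself, which is how thin regions of the blob get thickened before the motion feature is extracted.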
4.2 Feature Extraction
After segmentation process, the next stage is feature
extraction. In this stage mid-level feature ‘motion’ is
extracted from the blob. Since the human action can be
effectively characterized by motion rather than other
features such as color, texture or shape, ‘motion’ feature is
extracted from the blob. We use optical flow to estimate motion. Optical flow estimates the direction and speed of a moving object from one video frame to another. There are two methods to find the optical flow: the Horn-Schunck method and the Lucas-Kanade method. For floating-point input the Horn-Schunck method is used, and for fixed-point input the Lucas-Kanade method is used. We use the Lucas-Kanade method to find the optical flow. [1], [7], [8]
The optical flow of an input video frame is shown below:
Fig-3: Feature Extraction (input frame, dilated frame, optical flow)
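In each window, the Lucas-Kanade method solves a least-squares system built from the spatial gradients (Ix, Iy) and the temporal difference (It) under the optical-flow constraint Ix·u + Iy·v + It = 0. A minimal single-window sketch in Python/NumPy (the paper uses MATLAB's optical-flow implementation; this illustrative version estimates one flow vector for a whole window of two gray frames):

```python
import numpy as np

def lucas_kanade_window(frame1, frame2):
    """Estimate one (u, v) flow vector for a window of two gray frames.

    Solves the least-squares system [Ix Iy] [u v]^T = -It built from the
    optical-flow constraint Ix*u + Iy*v + It = 0 at every pixel.
    """
    f1 = frame1.astype(float)
    f2 = frame2.astype(float)
    # Central-difference spatial gradients and forward temporal difference.
    Ix = (np.roll(f1, -1, axis=1) - np.roll(f1, 1, axis=1)) / 2.0
    Iy = (np.roll(f1, -1, axis=0) - np.roll(f1, 1, axis=0)) / 2.0
    It = f2 - f1
    # Drop border pixels where np.roll wraps around.
    Ix, Iy, It = (a[1:-1, 1:-1].ravel() for a in (Ix, Iy, It))
    A = np.stack([Ix, Iy], axis=1)
    u, v = np.linalg.lstsq(A, -It, rcond=None)[0]
    return u, v
```

For a smooth pattern translated one pixel to the right between frames, the recovered vector is close to (1, 0): the direction and speed of the moving object, which is exactly the mid-level feature the classifier consumes.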
4.3 Action Recognition
This is the last stage of action recognition. For recognition,
we use k Nearest Neighbor (kNN). [1], [3], [4], [6]
kNN is a standard classifier which is mostly used for
action recognition because it does not require any learning
process and also it is invariant against view-point, spatial
and temporal variations.
Before classification using kNN, the proposed method
computes the following steps:
i) First the centroid of the connected region in the optical
flow is computed. This is called Blob analysis.
ii) The dimension of the Blob-analysis output is very large; hence, to reduce its dimension, Principal Component Analysis (PCA) is done. PCA is a technique which reduces the dimension of large data sets.
iii) Then covariance matrix of PCA is found.
iv) Then the eigenvalues (EVA) of the covariance matrix are found. The EVA measure the magnitude of the corresponding relative motion. This is called the training data for classification.
v) Then the nearest neighbor in the training data is searched
using the distance metric ‘Euclidean distance’ of kNN.
vi) After the k-Nearest Neighbor (kNN) search, the samples are classified using the kNN classifier. If the 1-NN is obtained for each action in the dataset, the action is recognized.
vii) The result is also plotted using function ‘gscatter’ to
observe the classification.
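Steps ii)–vi) above amount to projecting each motion descriptor with PCA and then matching it against the training data by Euclidean distance. A minimal Python/NumPy sketch under that reading (the paper works in MATLAB; the toy feature vectors in the usage below are illustrative, not taken from the datasets):

```python
import numpy as np

def pca_fit(X, k):
    """PCA via the covariance matrix and its eigen-decomposition
    (steps iii and iv): return the mean and the top-k eigenvectors,
    sorted by descending eigenvalue."""
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending order
    order = np.argsort(eigvals)[::-1][:k]
    return mean, eigvecs[:, order]

def knn_classify(train_X, train_y, query, k=1):
    """k-Nearest Neighbor with Euclidean distance (steps v and vi):
    majority label among the k closest training samples."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

A toy usage: two well-separated clusters of 6-dimensional feature vectors standing in for two actions, reduced to 2 dimensions by PCA and classified with 1-NN.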
5. FLOWCHART OF THE PROPOSED METHOD
Input Video → Blob Extraction → Optical Flow → Blob Analysis → PCA → Find the covariance matrix → Find its Eigen values → kNN search → kNN classify → Recognized output

6. RESULTS
Fig-4: Recognition Result (input video, dilated frame, optical flow, recognized output)
7. DISCUSSION
The recognition accuracy obtained by using the proposed method is shown below:
Table 1. Recognition using KTH Dataset

Type of sequence | Total seq | Correctly Recognized | In %
walking          | 10        | 10                   | 100
running          | 10        | 10                   | 100
hand waving      | 10        | 10                   | 100
handclapping     | 10        | 10                   | 100
boxing           | 10        | 10                   | 100
jogging          | 10        | 10                   | 100
                 | Σ = 60    | Σ = 60               | Avg = 100
Average % of accuracy using KTH dataset is 100%.
Table 2. Recognition using Weizmann Dataset

Type of sequence | Total seq | Correctly Recognized | In %
walk             | 9         | 9                    | 100
run              | 9         | 9                    | 100
jack             | 9         | 9                    | 100
skip             | 9         | 9                    | 100
side             | 9         | 9                    | 100
bend             | 9         | 9                    | 100
jump             | 9         | 9                    | 100
pjump            | 9         | 9                    | 100
wave 1           | 9         | 9                    | 100
wave 2           | 9         | 9                    | 100
                 | Σ = 90    | Σ = 90               | Avg = 100
Average % of accuracy using Weizmann dataset is 100%.
8. CONCLUSION
This paper has presented a motion-based approach for action recognition. It uses the 2D blob as the low-level feature and extracts the mid-level feature ‘motion’ from the blob using the Lucas-Kanade method of optical flow. The motion features so obtained are classified using the kNN classifier. The advantage of using kNN is that it does not require any learning process and is also invariant against view-point, temporal and spatial variations; hence its accuracy is good. The average accuracy of the proposed method is 100% on the Weizmann and KTH datasets.
REFERENCES
[1] S. Hari Kumar, P.Sivaprakash, “New Approach for
Action Recognition Using Motion based Features”,
Proceedings of 2013 IEEE Conference on Information and
Communication Technologies (ICT 2013), pp.1247-1252.
[2] Hetal Shah, N. C. Chauhan, “Recognition of Human Actions in Video”, International Journal on Recent and Innovation Trends in Computing and Communication (IJRITCC), May 2013, ISSN 2321-8169, Volume 1, Issue 5, pp. 489–493.
[3] Xiaodong Yang and YingLi Tian, “Eigen Joints-based
Action Recognition Using Naïve-Bayes-Nearest-Neighbor”,
2012 IEEE, pp.14-19.
[4] Mi Zhang Alexander A. Sawchuk, “Motion Primitive-
Based Human Activity Recognition Using a Bag-of-
Features Approach”, IHI’12, January 28–30, 2012, Miami,
Florida, USA.
[5] Muhammad Hameed Siddiqi, Muhammad Fahim, Sungyoung Lee, Young-Koo Lee, “Human Activity Recognition Based on Morphological Dilation followed by Watershed Transformation Method”, 2010 International Conference on Electronics and Information Engineering (ICEIE 2010), Volume 2, 2010 IEEE, pp. V2-433–V2-437.
[6] Ronald Poppe, “A survey on vision-based human action
recognition”, Science Direct Image and Vision Computing
28 (2010) 976–990.
[7] Mohiuddin Ahmad, Seong-Whan Lee, “Human action
recognition using shape and CLG-(Combined local-global)
motion flow from multi-view image sequences”, Science
Direct Pattern Recognition 41 (2008), 2237 – 2252.
[8] Mohiuddin Ahmad and Seong-Whan Lee, “HMM-based
Human Action Recognition Using Multiview Image
Sequences”, Proceedings of the 18th International
Conference on Pattern Recognition (ICPR'06), 2006 IEEE.
[9] Moshe Blank, Lena Gorelick, Eli Shechtman, Michal Irani, Ronen Basri, “Actions as space–time shapes”, Proceedings of the International Conference on Computer Vision (ICCV’05), vol. 2, Beijing, China, October 2005, pp. 1395–1402.
[10] O. Masoud and N. Papanikolopoulos, “A method for
human action recognition,” IVC, Vol. 21, 2003, pp.729-743.
[11] J. K. Aggarwal and Q. Cai, “Human Motion Analysis:
A Review”, idealibrary: Computer Vision and Image
Understanding,Vol. 73, No. 3, March 1999, pp. 428–440.
[12] http://www.nada.kth.se/cvap/actions

More Related Content

PDF
IRJET- A Review Analysis to Detect an Object in Video Surveillance System
PDF
Threshold based filtering technique for efficient moving object detection and...
PDF
22 29 dec16 8nov16 13272 28268-1-ed(edit)
PDF
Embedded Implementations of Real Time Video Stabilization Mechanisms A Compre...
PDF
Event Detection Using Background Subtraction For Surveillance Systems
PDF
IRJET- Tracking and Recognition of Multiple Human and Non-Human Activites
PDF
IRJET - Human Eye Pupil Detection Technique using Center of Gravity Method
PDF
Gait Recognition using MDA, LDA, BPNN and SVM
IRJET- A Review Analysis to Detect an Object in Video Surveillance System
Threshold based filtering technique for efficient moving object detection and...
22 29 dec16 8nov16 13272 28268-1-ed(edit)
Embedded Implementations of Real Time Video Stabilization Mechanisms A Compre...
Event Detection Using Background Subtraction For Surveillance Systems
IRJET- Tracking and Recognition of Multiple Human and Non-Human Activites
IRJET - Human Eye Pupil Detection Technique using Center of Gravity Method
Gait Recognition using MDA, LDA, BPNN and SVM

What's hot (19)

PDF
Transform Domain Based Iris Recognition using EMD and FFT
PDF
International Journal of Engineering Research and Development (IJERD)
PDF
IRJET- Video Forgery Detection using Machine Learning
PDF
Analysis of Human Behavior Based On Centroid and Treading Track
PDF
G01114650
PDF
IRJET- Autonamy of Attendence using Face Recognition
PDF
[IJET-V1I3P20] Authors:Prof. D.S.Patil, Miss. R.B.Khanderay, Prof.Teena Padvi.
PDF
International Journal of Computational Engineering Research(IJCER)
PDF
The Biometric Algorithm based on Fusion of DWT Frequency Components of Enhanc...
PDF
IRDO: Iris Recognition by fusion of DTCWT and OLBP
PDF
Development of Human Tracking in Video Surveillance System for Activity Anal...
PDF
MULTI SCALE ICA BASED IRIS RECOGNITION USING BSIF AND HOG
PDF
IRJET - Automated Fraud Detection Framework in Examination Halls
PDF
I017525560
PDF
Discovering Anomalies Based on Saliency Detection and Segmentation in Surveil...
PDF
Volume 2-issue-6-1974-1978
PDF
IRJET- Convenience Improvement for Graphical Interface using Gesture Dete...
PDF
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...
PDF
IRJET- Full Body Motion Detection and Surveillance System Application
Transform Domain Based Iris Recognition using EMD and FFT
International Journal of Engineering Research and Development (IJERD)
IRJET- Video Forgery Detection using Machine Learning
Analysis of Human Behavior Based On Centroid and Treading Track
G01114650
IRJET- Autonamy of Attendence using Face Recognition
[IJET-V1I3P20] Authors:Prof. D.S.Patil, Miss. R.B.Khanderay, Prof.Teena Padvi.
International Journal of Computational Engineering Research(IJCER)
The Biometric Algorithm based on Fusion of DWT Frequency Components of Enhanc...
IRDO: Iris Recognition by fusion of DTCWT and OLBP
Development of Human Tracking in Video Surveillance System for Activity Anal...
MULTI SCALE ICA BASED IRIS RECOGNITION USING BSIF AND HOG
IRJET - Automated Fraud Detection Framework in Examination Halls
I017525560
Discovering Anomalies Based on Saliency Detection and Segmentation in Surveil...
Volume 2-issue-6-1974-1978
IRJET- Convenience Improvement for Graphical Interface using Gesture Dete...
Evaluation of Iris Recognition System on Multiple Feature Extraction Algorith...
IRJET- Full Body Motion Detection and Surveillance System Application
Ad

Viewers also liked (20)

PDF
A new dynamic single row routing for channel
PDF
Wound epithelization model by 3 d imaging
PDF
Disaster recovery sustainable housing
PDF
Implementation of sql server based on sqlite engine on
PDF
Generating three dimensional photo-realistic model of
PDF
IJRET : International Journal of Research in Engineering and TechnologyOptimi...
PDF
Detection of hazard prone areas in the upper himalayan region in gis environment
PDF
Low power fpga solution for dab audio decoder
PDF
Invalidating vulnerable broadcaster nodes using
PDF
5 s implementation and its effect on physical workload
PDF
Behaviour of interfaces between carbon fibre reinforced polymer and gravel soils
PDF
Improved block based segmentation for jpeg
PDF
Fuel wastage and emission due to idling of vehicles at
PDF
A survey report for performance analysis of finite
PDF
Effects of aging time on mechanical properties of sand cast al 4.5 cu alloy
PDF
Performance analysis of ml and mmse decoding using
PDF
Image retrieval based on feature selection method
PDF
A novel approach to record sound
PDF
Comparative review study of security of aran and aodv routing protocols in ma...
PDF
Smart phone for elderly populace
A new dynamic single row routing for channel
Wound epithelization model by 3 d imaging
Disaster recovery sustainable housing
Implementation of sql server based on sqlite engine on
Generating three dimensional photo-realistic model of
IJRET : International Journal of Research in Engineering and TechnologyOptimi...
Detection of hazard prone areas in the upper himalayan region in gis environment
Low power fpga solution for dab audio decoder
Invalidating vulnerable broadcaster nodes using
5 s implementation and its effect on physical workload
Behaviour of interfaces between carbon fibre reinforced polymer and gravel soils
Improved block based segmentation for jpeg
Fuel wastage and emission due to idling of vehicles at
A survey report for performance analysis of finite
Effects of aging time on mechanical properties of sand cast al 4.5 cu alloy
Performance analysis of ml and mmse decoding using
Image retrieval based on feature selection method
A novel approach to record sound
Comparative review study of security of aran and aodv routing protocols in ma...
Smart phone for elderly populace
Ad

Similar to Motion based action recognition using k nearest neighbor (20)

PDF
Motion based action recognition using k nearest neighbor
PDF
Flow Trajectory Approach for Human Action Recognition
PDF
A COMPARATIVE STUDY ON HUMAN ACTION RECOGNITION USING MULTIPLE SKELETAL FEATU...
PDF
A COMPARATIVE STUDY ON HUMAN ACTION RECOGNITION USING MULTIPLE SKELETAL FEATU...
PDF
IRJET- Recurrent Neural Network for Human Action Recognition using Star S...
PDF
E03404025032
PPTX
A general survey of previous works on action recognition
PPTX
Silhouette analysis based action recognition via exploiting human poses
PPTX
Human Action Recognition in Videos Employing 2DPCA on 2DHOOF and Radon Transform
PDF
Volume 2-issue-6-1960-1964
PDF
Volume 2-issue-6-1960-1964
PDF
Human Action Recognition Using Deep Learning
PDF
IRJET- Recognition of Human Action Interaction using Motion History Image
PPTX
PDF
Inspection of Suspicious Human Activity in the Crowd Sourced Areas Captured i...
PPTX
Human Action Recognition Based on Spacio-temporal features-Poster
PDF
HUMAN ACTION RECOGNITION IN VIDEOS USING STABLE FEATURES
PPT
Human Action Recognition Based on Spacio-temporal features
PDF
A COMPARATIVE STUDY ON HUMAN ACTION RECOGNITION USING MULTIPLE SKELETAL FEATU...
PPTX
seminar Islideshow.pptx
Motion based action recognition using k nearest neighbor
Flow Trajectory Approach for Human Action Recognition
A COMPARATIVE STUDY ON HUMAN ACTION RECOGNITION USING MULTIPLE SKELETAL FEATU...
A COMPARATIVE STUDY ON HUMAN ACTION RECOGNITION USING MULTIPLE SKELETAL FEATU...
IRJET- Recurrent Neural Network for Human Action Recognition using Star S...
E03404025032
A general survey of previous works on action recognition
Silhouette analysis based action recognition via exploiting human poses
Human Action Recognition in Videos Employing 2DPCA on 2DHOOF and Radon Transform
Volume 2-issue-6-1960-1964
Volume 2-issue-6-1960-1964
Human Action Recognition Using Deep Learning
IRJET- Recognition of Human Action Interaction using Motion History Image
Inspection of Suspicious Human Activity in the Crowd Sourced Areas Captured i...
Human Action Recognition Based on Spacio-temporal features-Poster
HUMAN ACTION RECOGNITION IN VIDEOS USING STABLE FEATURES
Human Action Recognition Based on Spacio-temporal features
A COMPARATIVE STUDY ON HUMAN ACTION RECOGNITION USING MULTIPLE SKELETAL FEATU...
seminar Islideshow.pptx

More from eSAT Publishing House (20)

PDF
Likely impacts of hudhud on the environment of visakhapatnam
PDF
Impact of flood disaster in a drought prone area – case study of alampur vill...
PDF
Hudhud cyclone – a severe disaster in visakhapatnam
PDF
Groundwater investigation using geophysical methods a case study of pydibhim...
PDF
Flood related disasters concerned to urban flooding in bangalore, india
PDF
Enhancing post disaster recovery by optimal infrastructure capacity building
PDF
Effect of lintel and lintel band on the global performance of reinforced conc...
PDF
Wind damage to trees in the gitam university campus at visakhapatnam by cyclo...
PDF
Wind damage to buildings, infrastrucuture and landscape elements along the be...
PDF
Shear strength of rc deep beam panels – a review
PDF
Role of voluntary teams of professional engineers in dissater management – ex...
PDF
Risk analysis and environmental hazard management
PDF
Review study on performance of seismically tested repaired shear walls
PDF
Monitoring and assessment of air quality with reference to dust particles (pm...
PDF
Low cost wireless sensor networks and smartphone applications for disaster ma...
PDF
Coastal zones – seismic vulnerability an analysis from east coast of india
PDF
Can fracture mechanics predict damage due disaster of structures
PDF
Assessment of seismic susceptibility of rc buildings
PDF
A geophysical insight of earthquake occurred on 21 st may 2014 off paradip, b...
PDF
Effect of hudhud cyclone on the development of visakhapatnam as smart and gre...
Likely impacts of hudhud on the environment of visakhapatnam
Impact of flood disaster in a drought prone area – case study of alampur vill...
Hudhud cyclone – a severe disaster in visakhapatnam
Groundwater investigation using geophysical methods a case study of pydibhim...
Flood related disasters concerned to urban flooding in bangalore, india
Enhancing post disaster recovery by optimal infrastructure capacity building
Effect of lintel and lintel band on the global performance of reinforced conc...
Wind damage to trees in the gitam university campus at visakhapatnam by cyclo...
Wind damage to buildings, infrastrucuture and landscape elements along the be...
Shear strength of rc deep beam panels – a review
Role of voluntary teams of professional engineers in dissater management – ex...
Risk analysis and environmental hazard management
Review study on performance of seismically tested repaired shear walls
Monitoring and assessment of air quality with reference to dust particles (pm...
Low cost wireless sensor networks and smartphone applications for disaster ma...
Coastal zones – seismic vulnerability an analysis from east coast of india
Can fracture mechanics predict damage due disaster of structures
Assessment of seismic susceptibility of rc buildings
A geophysical insight of earthquake occurred on 21 st may 2014 off paradip, b...
Effect of hudhud cyclone on the development of visakhapatnam as smart and gre...

Recently uploaded (20)

PPTX
Internet of Things (IOT) - A guide to understanding
PPTX
web development for engineering and engineering
PDF
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
PPTX
UNIT 4 Total Quality Management .pptx
DOCX
ASol_English-Language-Literature-Set-1-27-02-2023-converted.docx
PDF
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
PPTX
MCN 401 KTU-2019-PPE KITS-MODULE 2.pptx
PPTX
CARTOGRAPHY AND GEOINFORMATION VISUALIZATION chapter1 NPTE (2).pptx
PPT
Mechanical Engineering MATERIALS Selection
PDF
Embodied AI: Ushering in the Next Era of Intelligent Systems
PDF
Mohammad Mahdi Farshadian CV - Prospective PhD Student 2026
PDF
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
PPTX
Lecture Notes Electrical Wiring System Components
PDF
Automation-in-Manufacturing-Chapter-Introduction.pdf
PPTX
Construction Project Organization Group 2.pptx
PPTX
additive manufacturing of ss316l using mig welding
PPTX
Sustainable Sites - Green Building Construction
PDF
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
PPTX
OOP with Java - Java Introduction (Basics)
PPT
Project quality management in manufacturing
Internet of Things (IOT) - A guide to understanding
web development for engineering and engineering
Mitigating Risks through Effective Management for Enhancing Organizational Pe...
UNIT 4 Total Quality Management .pptx
ASol_English-Language-Literature-Set-1-27-02-2023-converted.docx
keyrequirementskkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkkk
MCN 401 KTU-2019-PPE KITS-MODULE 2.pptx
CARTOGRAPHY AND GEOINFORMATION VISUALIZATION chapter1 NPTE (2).pptx
Mechanical Engineering MATERIALS Selection
Embodied AI: Ushering in the Next Era of Intelligent Systems
Mohammad Mahdi Farshadian CV - Prospective PhD Student 2026
Evaluating the Democratization of the Turkish Armed Forces from a Normative P...
Lecture Notes Electrical Wiring System Components
Automation-in-Manufacturing-Chapter-Introduction.pdf
Construction Project Organization Group 2.pptx
additive manufacturing of ss316l using mig welding
Sustainable Sites - Green Building Construction
BMEC211 - INTRODUCTION TO MECHATRONICS-1.pdf
OOP with Java - Java Introduction (Basics)
Project quality management in manufacturing

Motion based action recognition using k nearest neighbor

  • 1. IJRET: International Journal of Research in Engineering and Technology __________________________________________________________ Volume: 03 Issue: 01 | Jan-2014, Available @ MOTION BASED ACTION RECOGNITION USING Shikha 1 M.E. Student, Branch Digital Electronics, Department of Electronics and Tele Maharaja College of Engineering, Shegaon, Sant Gadge Baba Amravati University, Maharashtra State, India. 2 Associate Professor, Branch Digital E Gajanan Maharaja College of Engineering, Shegaon, Sant Gadge Baba Amravati University ,Maharashtra State, Analyzing the actions of a person from a video by topicin the area of computer vision. There monitoring systems, human performance analysis, content efficient applications are available of action recognition, the most active application domain in is to “look at people”. In this paper, motion human body and hence human actions can be effectively shape. In the motion-based approach, the method that extracts filtering or watershed transform are used for recognizing analyzes human movements directly from extraction and action recognition. In the first step motion feature is extracted using optical flow Keywords: Action Recognition, 2Dblob, ----------------------------------------------------------------- 1. INTRODUCTION Recognizing or understanding the actions from a video is the objective of action recognition. The main objective of our method is to improve the accuracy of recognition. Action recognition is classified in four types: Object-level, Tracking-level, Pose Activity-level. 
Object-level recognizes the locations of objects, Tracking-level recognizes object trajectories, Pose-level recognizes the pose of a person, and Activity-level recognizes the activity of a person. The major problems faced in action recognition are: view-point variation (movement of the camera), temporal variation (variation in the duration and shift of an action), and spatial variation (different people perform the same action in different ways). In this paper we present a method that addresses all of the above problems by using the concepts of optical flow and kNN.

2. TOOL

The tool used for recognition is MATLAB, version 7.10.0 (R2010a).

3. DATASETS

The datasets used for action recognition are KTH and Weizmann [9], [12].

Weizmann: This dataset contains 10 types of actions performed by 9 subjects, for a total of 90 AVI videos, taken with a static camera against a static background at a frame rate of 25 fps. Actions in this dataset include: bend, jack, run, side, skip, wave1, wave2, jump, p-jump and walk.

KTH: This dataset contains 6 types of actions performed by 25 subjects over 4 homogeneous backgrounds, for a total of 600 AVI videos, taken with a static camera at a frame rate of 25 fps. Actions in this dataset include: walking, jogging, running, boxing, hand waving, and hand clapping.

4. METHOD

The proposed method contains three main stages of recognition: Blob Extraction, Feature Extraction and Action Recognition.

Fig-1: Stages of Action Recognition (Blob Extraction, Feature Extraction, Action Recognition)
4.1 Blob Extraction

The most commonly used low-level feature for identifying human action is the 2D blob; hence the first stage is called blob extraction (also known as segmentation or pre-processing). In this stage, the color video is first converted from RGB to grayscale and then to binary. To remove salt-and-pepper noise, the grayscale video is median filtered before being converted to binary using auto-thresholding. This stage thus divides the input video into two classes: the foreground (the activity of the human), called the 2D blob, and the background (the empty frame). Then, for enhancement, dilation is performed: the binary video is dilated by a structuring element (called a mask or window) of size 3 x 3, 5 x 5 or 7 x 7. The structuring element of the proposed method is the 5 x 5 cross shown below:

0 0 1 0 0
0 0 1 0 0
1 1 1 1 1
0 0 1 0 0
0 0 1 0 0

After dilation, an enhanced 2D blob is obtained. This enhancement is done so that the 'motion' feature can be extracted quickly and easily [2], [5], [11]. The blob extraction is shown below:

Fig-2: Blob extraction (input frame, dilated frame)

4.2 Feature Extraction

After the segmentation process, the next stage is feature extraction. In this stage the mid-level feature 'motion' is extracted from the blob, since human action is characterized more effectively by motion than by other features such as color, texture or shape. We use optical flow to estimate motion. Optical flow estimates the direction and speed of a moving object from one video frame to the next. There are two common methods for computing optical flow: the Horn-Schunck method and the Lucas-Kanade method.
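The blob-extraction stage described above (median filtering, thresholding, then dilation with the 5 x 5 cross element) can be sketched in plain Python. This is an illustrative re-implementation, not the paper's MATLAB code; in particular, the fixed threshold of 128 is a stand-in for the auto-thresholding step.

```python
# Illustrative sketch of the blob-extraction stage: 3x3 median filter,
# fixed-threshold binarization (stand-in for auto-thresholding), and
# binary dilation with the 5x5 cross structuring element.

# Offsets of the 5x5 cross element shown in Section 4.1.
CROSS = [(dr, dc) for dr in range(-2, 3) for dc in range(-2, 3)
         if dr == 0 or dc == 0]

def median_filter3(frame):
    """3x3 median filter; suppresses salt-and-pepper noise."""
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            window = sorted(
                frame[min(max(r + dr, 0), h - 1)][min(max(c + dc, 0), w - 1)]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            out[r][c] = window[4]  # median of the 9 window values
    return out

def binarize(frame, thresh=128):
    """Foreground/background split (fixed threshold, not auto)."""
    return [[1 if v >= thresh else 0 for v in row] for row in frame]

def dilate(binary, se=CROSS):
    """Binary dilation: set a pixel if any cross-neighbour is set."""
    h, w = len(binary), len(binary[0])
    return [[int(any(0 <= r + dr < h and 0 <= c + dc < w
                     and binary[r + dr][c + dc]
                     for dr, dc in se))
             for c in range(w)] for r in range(h)]

def extract_blob(gray_frame):
    """Full pre-processing chain for one grayscale frame."""
    return dilate(binarize(median_filter3(gray_frame)))
```

Replacing the fixed threshold with an automatic method such as Otsu's would match the auto-thresholding behaviour described in the paper more closely.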
The Horn-Schunck method is used for floating-point input and the Lucas-Kanade method for fixed-point input. We use the Lucas-Kanade method to find the optical flow [1], [7], [8]. The optical flow of an input video frame is shown below:

Fig-3: Feature Extraction (input frame, dilated frame, optical flow)

4.3 Action Recognition

This is the last stage of action recognition. For recognition, we use the k-Nearest Neighbor (kNN) classifier [1], [3], [4], [6]. kNN is a standard classifier widely used for action recognition because it does not require any learning process and is invariant to view-point, spatial and temporal variations. Before classification with kNN, the proposed method performs the following steps:

i) First, the centroid of the connected region in the optical flow is computed. This is called blob analysis.
ii) The dimension of the blob-analysis output is very large, so Principal Component Analysis (PCA) is applied to reduce it. PCA is a technique that reduces the dimension of large data sets.
iii) The covariance matrix of the PCA output is then computed.
iv) The eigenvalues (EVA) of the covariance matrix are found. The eigenvalues measure the magnitude of the corresponding relative motion and form the training data for classification.
v) The nearest neighbor in the training data is then searched for using the Euclidean distance metric of kNN.
vi) After the k-Nearest Neighbor search, the samples are classified using the kNN classifier. If the 1-NN is obtained for each action in the dataset, the action is recognized.
vii) The result is also plotted using the 'gscatter' function to observe the classification.
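The covariance, eigenvalue and nearest-neighbor steps above can be sketched as follows. This is a simplified pure-Python illustration operating on 2-D centroid tracks, with closed-form eigenvalues of the 2 x 2 covariance matrix; it is not the paper's MATLAB implementation, and the action labels used in the example are hypothetical.

```python
import math

def covariance2(points):
    """2x2 covariance of a list of (x, y) blob centroids (step iii)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    return sxx, sxy, syy

def eigenvalues2(sxx, sxy, syy):
    """Eigenvalues of the symmetric matrix [[sxx, sxy], [sxy, syy]] (step iv)."""
    tr = sxx + syy
    det = sxx * syy - sxy * sxy
    root = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    return tr / 2.0 + root, tr / 2.0 - root

def knn_classify(feature, training, k=1):
    """k-NN with Euclidean distance (steps v-vi).
    training is a list of (feature_vector, action_label) pairs."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    neighbours = sorted(training, key=lambda item: dist(feature, item[0]))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)  # majority vote
```

With k=1 this reduces to the 1-NN rule the paper relies on: a test feature vector takes the label of the single closest training vector.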
5. FLOWCHART OF THE PROPOSED METHOD

Input Video -> Blob Extraction -> Optical Flow -> Blob Analysis -> PCA -> Find the covariance matrix -> Find its eigenvalues -> kNN search -> kNN classify -> Recognized output

6. RESULTS

Fig-4: Recognition Result (input video, dilated frame, optical flow, recognized output)
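The 'Optical Flow' stage of the flowchart can be illustrated with a minimal Lucas-Kanade estimate for a single window: the flow (u, v) is the least-squares solution of the 2 x 2 normal equations built from the spatial and temporal image gradients. This pure-Python sketch (central differences, one window covering the whole frame) is illustrative only and is not the MATLAB implementation used in the paper.

```python
def lucas_kanade_window(f0, f1):
    """Lucas-Kanade flow (u, v) for one window spanning two grayscale
    frames: least-squares solution of the 2x2 normal equations built
    from spatial gradients (fx, fy) and the temporal gradient (ft)."""
    h, w = len(f0), len(f0[0])
    a11 = a12 = a22 = b1 = b2 = 0.0
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            fx = (f0[r][c + 1] - f0[r][c - 1]) / 2.0  # central differences
            fy = (f0[r + 1][c] - f0[r - 1][c]) / 2.0
            ft = f1[r][c] - f0[r][c]
            a11 += fx * fx; a12 += fx * fy; a22 += fy * fy
            b1 -= fx * ft;  b2 -= fy * ft
    det = a11 * a22 - a12 * a12
    if abs(det) < 1e-12:
        return 0.0, 0.0  # aperture problem: gradient directions degenerate
    u = (a22 * b1 - a12 * b2) / det
    v = (a11 * b2 - a12 * b1) / det
    return u, v
```

Running it on a small textured frame shifted by one pixel horizontally recovers a flow of approximately (1, 0), i.e. the direction and speed of the motion between the two frames.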
7. DISCUSSION

The recognition accuracy obtained with the proposed method is shown below:

Table 1. Recognition using KTH Dataset

Type of sequence    Total seq.    Correctly recognized    In %
walking             10            10                      100
running             10            10                      100
hand waving         10            10                      100
hand clapping       10            10                      100
boxing              10            10                      100
jogging             10            10                      100
                    Σ = 60        Σ = 60                  Avg = 100

The average accuracy using the KTH dataset is 100%.

Table 2. Recognition using Weizmann Dataset

Type of sequence    Total seq.    Correctly recognized    In %
walk                9             9                       100
run                 9             9                       100
jack                9             9                       100
skip                9             9                       100
side                9             9                       100
bend                9             9                       100
jump                9             9                       100
pjump               9             9                       100
wave1               9             9                       100
wave2               9             9                       100
                    Σ = 90        Σ = 90                  Avg = 100

The average accuracy using the Weizmann dataset is 100%.

8. CONCLUSION

This paper has presented a motion-based approach for action recognition. It uses the 2D blob as the low-level feature and extracts the mid-level feature 'motion' from the blob using the Lucas-Kanade optical-flow method. The motion features so obtained are classified using the kNN classifier. The advantage of kNN is that it requires no learning process and is invariant to view-point, temporal and spatial variations; hence its accuracy is good. The average accuracy of the proposed method is 100% on the Weizmann and KTH datasets.

REFERENCES

[1] S. Hari Kumar, P. Sivaprakash, "New Approach for Action Recognition Using Motion based Features", Proceedings of the 2013 IEEE Conference on Information and Communication Technologies (ICT 2013), pp. 1247-1252.
[2] Hetal Shah, N. C. Chauhan, "Recognition of Human Actions in Video", International Journal on Recent and Innovation Trends in Computing and Communication (IJRITCC), Vol. 1, Issue 5, May 2013, ISSN 2321-8169, pp. 489-493.
[3] Xiaodong Yang, YingLi Tian, "EigenJoints-based Action Recognition Using Naive-Bayes-Nearest-Neighbor", 2012 IEEE, pp. 14-19.
[4] Mi Zhang, Alexander A. Sawchuk, "Motion Primitive-Based Human Activity Recognition Using a Bag-of-Features Approach", IHI '12, January 28-30, 2012, Miami, Florida, USA.
[5] Muhammad Hameed Siddiqi, Muhammad Fahim, Sungyoung Lee, Young-Koo Lee, "Human Activity Recognition Based on Morphological Dilation followed by Watershed Transformation Method", 2010 International Conference on Electronics and Information Engineering (ICEIE 2010), Vol. 2, 2010 IEEE, pp. V2-433-V2-437.
[6] Ronald Poppe, "A survey on vision-based human action recognition", Image and Vision Computing 28 (2010), pp. 976-990.
[7] Mohiuddin Ahmad, Seong-Whan Lee, "Human action recognition using shape and CLG (combined local-global) motion flow from multi-view image sequences", Pattern Recognition 41 (2008), pp. 2237-2252.
[8] Mohiuddin Ahmad, Seong-Whan Lee, "HMM-based Human Action Recognition Using Multiview Image Sequences", Proceedings of the 18th International Conference on Pattern Recognition (ICPR '06), 2006 IEEE.
[9] Moshe Blank, Lena Gorelick, Eli Shechtman, Michal Irani, Ronen Basri, "Actions as space-time shapes", Proceedings of the International Conference on Computer Vision (ICCV '05), Vol. 2, Beijing, China, October 2005, pp. 1395-1402.
[10] O. Masoud, N. Papanikolopoulos, "A method for human action recognition", Image and Vision Computing, Vol. 21, 2003, pp. 729-743.
[11] J. K. Aggarwal, Q. Cai, "Human Motion Analysis: A Review", Computer Vision and Image Understanding, Vol. 73, No. 3, March 1999, pp. 428-440.
[12] http://www.nada.kth.se/cvap/actions