International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 05 Issue: 06 | June -2018 www.irjet.net p-ISSN: 2395-0072
Implementation of Image Processing using Augmented Reality
Konjengbam Jackichand Singh1, L. P. Saikia2
1M.Tech, Computer Science & Engineering, Assam Downtown University, India
2Professor, Computer Science & Engineering, Assam Downtown University, India
---------------------------------------------------------------------***---------------------------------------------------------------------
Abstract - Augmented reality is the new paradigm shift in this world of ever-changing technological marvels. Augmented reality is the integration of virtual elements into the real-world environment. It offers a new perspective: looking at information in conjunction with the real world. In this paper, we propose to develop a simple mobile educational application using augmented reality resources. The proposed system will use different kinds of image processing, such as text recognition, marker-based tracking, markerless tracking, cloud-based recognition, and model tracking.
Key Words: Augmented Reality, Educational system,
Image Processing, MAR (Mobile Augmented Reality)
1. INTRODUCTION
The new age of technological development is changing the education environment from the traditional classroom scene of blackboard and chalk to smart education appliances such as projectors, teaching aids, tablets, and e-book readers, offering vast opportunities for information and various challenges to students and teachers. Along these lines, research is being conducted to introduce new kinds of teaching and learning experiences. Among them is the possibility of bringing augmented reality into pedagogical applications, providing users with visualization, interaction, and experimentation with educational objects.
Through augmented reality, it is possible to further enhance the learning experience by creating a deeper level of user engagement through the visualization of 3D objects superimposed on real ones.
As augmented reality gains recognition in education, researchers are finding new strategies to improve the learning experience. Considering this, Nincarean et al. proposed that the effectiveness of augmented reality could be improved by combining it with other technologies such as mobile devices, thereby incorporating useful features such as portability, social connectivity, and individuality. From this combination a new concept is formed, called Mobile Augmented Reality (MAR).
2. AUGMENTED REALITY
Augmented reality is the integration of the virtual world and the real world, in which virtual elements are present alongside the real environment. To give a deeper understanding of the contrast between augmented reality and virtual reality, Milgram and Kishino introduced a continuum known as Milgram's virtuality continuum [14]. They defined augmented reality as the real world “augmented” by means of virtual elements. Virtual reality enables the user to be fully immersed in a three-dimensional virtual environment, while augmented reality aims to combine virtual and real elements.
Fig 2.1: Milgram’s Virtual Continuum
As computers increase in processing power and shrink in size, mobile devices and wearables become increasingly feasible, and people are able to access the online world at any place and at any time. This new flexibility of mobile devices allows a new breed of applications that can change the paradigm of technology. Augmented reality applications added to mobile devices gain portability, social connectivity, and individuality as additional features.
2.1. Image Processing
Various types of image processing are used in augmented reality, for many different input sources. Depending on the input source, various outputs can be generated.
2.1.1. Text Recognition
Text Recognition otherwise referred to as OCR(Optical
Character Reader). It involves asystemcapableoftranslating
an image of text into machine editable text or into standard
encoding scheme representing each one of them. Its real
application is in bank, offices, post office, etc where image of
documents needs to be converted into an editabledocument.
The Steps are:
1. Preprocessing: This is the first step in processing the scanned image. The image is checked for noise, skew, and slant; it is then converted into grayscale and binarized.
2. Segmentation: After preprocessing, the noise-free image is passed to the segmentation phase, where it is decomposed into individual characters. The binarized image is checked for inter-line spaces; if they are detected, the image is segmented into sets of paragraphs along the inter-line gaps. The lines in each paragraph are then scanned for horizontal spaces with respect to the background. A histogram is used to detect the extent of each horizontal line; the image is then scanned vertically for vertical intersections, which also serves to detect the words. The words are finally decomposed into characters using character-width computation.
3. Feature Extraction: Feature extraction follows segmentation. Here individual image glyphs are considered and their features extracted.
4. Classification: Classification follows feature extraction; each character glyph is compared against its corresponding reference glyphs. The extracted features are analysed using a set of rules and labelled as belonging to a particular class. Such a classification is typically font-specific, working reliably only for a single font.
Fig 2.2: Flowchart of Text Recognition
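As a concrete illustration of the preprocessing and segmentation stages above, the following is a minimal sketch, assuming OpenCV as the dependency (the paper does not name a library; the input file name is hypothetical). It binarizes a scanned page and uses a horizontal projection histogram to split it into text lines:

```python
import cv2

def segment_lines(image_path):
    # Preprocessing: load as grayscale, remove noise, binarize
    # (Otsu threshold, inverted so text pixels become white).
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    gray = cv2.medianBlur(gray, 3)
    _, binary = cv2.threshold(gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    # Segmentation: a horizontal projection histogram. Rows summing
    # to zero are inter-line gaps; runs of non-zero rows are lines.
    row_sums = binary.sum(axis=1)
    lines, start = [], None
    for y, total in enumerate(row_sums):
        if total > 0 and start is None:
            start = y                      # a text line begins
        elif total == 0 and start is not None:
            lines.append(binary[start:y])  # a text line ends
            start = None
    if start is not None:
        lines.append(binary[start:])
    return lines

lines = segment_lines("scanned_page.png")  # hypothetical input file
print(f"detected {len(lines)} text lines")
```

The same projection, taken column-wise within each line, separates words and, with character-width computation, individual characters.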
2.1.2. Marker Recognition
A marker is an image (it could be anything) that can be uniquely identified and not confused with other images, as it serves as the gateway through which virtual elements are brought into the real world. The application is programmed to recognize the marker in an arbitrary scene and report its location and orientation.
To recognize a marker, the following steps are followed (a combined sketch appears after the list):
1. Input: The image is captured using the onboard camera or a camera connected to the system. It can be captured either as a still image or as a video; a video is preferred. The input is then sent for processing further down the pipeline.
2. Grayscale Conversion: Processing is done in the intensity plane, so the input frame is converted to grayscale. The input from an Android phone is given in RGB format, so it is converted to grayscale.
3. Thresholding: The grayscale image is then converted into a binary image so that connected-component analysis can be performed. Two types of thresholding can be used for the conversion: adaptive thresholding and global thresholding.
4. Blob Analysis: Processing moves from the image plane to a group of connected components, together with the geometrical properties of each one.
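The following is a hedged sketch of steps 2-4, again assuming OpenCV (the paper does not specify an implementation). Adaptive thresholding and connected-component statistics stand in for the grayscale-conversion, thresholding, and blob-analysis stages:

```python
import cv2

def find_blobs(frame_bgr):
    # Step 2: move the RGB/BGR camera frame to the intensity plane.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Step 3: adaptive thresholding (per-neighbourhood) is more robust
    # to uneven lighting than a single global threshold.
    binary = cv2.adaptiveThreshold(gray, 255,
                                   cv2.ADAPTIVE_THRESH_MEAN_C,
                                   cv2.THRESH_BINARY_INV, 11, 7)

    # Step 4: blob analysis -- connected components with geometric
    # properties (bounding box, area, centroid) for each component.
    n, _, stats, centroids = cv2.connectedComponentsWithStats(binary)
    return [(stats[i], centroids[i]) for i in range(1, n)
            if stats[i, cv2.CC_STAT_AREA] > 100]  # drop noise blobs

cap = cv2.VideoCapture(0)                         # step 1: live video
ok, frame = cap.read()
if ok:
    print(f"{len(find_blobs(frame))} candidate marker regions")
cap.release()
```

A real marker detector would further filter these blobs by shape (e.g., quadrilaterals) before decoding the marker's identity and pose.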
2.1.3. Markerless Recognition
This kind of image processing differs from the other recognition methods: it uses no predefined marker, but instead creates a marker from images captured through the camera. The proposed method has two steps (a feature-matching sketch follows Fig 2.3):
1. Preprocessing: A 3D model of the environment is reconstructed with structure-from-motion (SfM) methods, using images taken from multiple viewpoints. The position and orientation of every viewpoint are stored in a database. The coordinates of the augmented 3D component are then defined according to the real viewpoints and also stored in the database. Here, SURF (Speeded-Up Robust Features) is used instead of SIFT (Scale-Invariant Feature Transform) for interest-point detection.
2. Real-time Processing: First, the files stored in the database are imported, and SURF feature points are extracted from the live stream. The 3D components are then rendered using the real-world orientation and position recovered from the captured image.
Fig 2.3: Markerless Recognition
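As a sketch of the real-time processing step, the snippet below matches live-stream features against one stored reference view. SURF is patented and only ships in opencv-contrib builds, so ORB is substituted here for interest-point detection; the file name and match count are illustrative:

```python
import cv2

orb = cv2.ORB_create(nfeatures=1000)  # free substitute for SURF
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

# Preprocessing side: describe one stored viewpoint from the database.
ref = cv2.imread("reference_view.png", cv2.IMREAD_GRAYSCALE)
ref_kp, ref_desc = orb.detectAndCompute(ref, None)

def match_live_frame(frame_gray):
    # Real-time side: extract feature points from the live stream and
    # match them against the stored viewpoint's descriptors.
    kp, desc = orb.detectAndCompute(frame_gray, None)
    if desc is None:
        return []
    matches = matcher.match(ref_desc, desc)
    # The best correspondences, combined with the pose stored for this
    # viewpoint, give the position and orientation used to render the
    # 3D components over the live image.
    return sorted(matches, key=lambda m: m.distance)[:50]
```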
3. FRAMEWORKS USED
3.1. Vuforia (QCAR)
Vuforia is a product of Qualcomm Connected Experiences. It is a software platform that enables some of the best and most creative augmented reality experiences. Vuforia uses a superior image-recognition algorithm and supports multiple platforms such as Android, iOS, Unity, and web browsers, which makes it better than other toolkits that offer only a single platform.
3.2. Unity 3D
Unity3D is an IDE platform for developing games; it can also be used to develop complex applications. It offers extensive 3D-creation features and supports Vuforia, which makes implementing augmented reality applications much easier and faster. Scripting is done in two languages, C# and JavaScript, and the front end is designed using XAML.
4. SYSTEM ARCHITECTURE
Fig 4.1: Proposed architecture
The proposed architecture is shown above. It comprises multiple modules, whose roles are as follows (a minimal end-to-end sketch appears after the list):
GUI: the Graphical User Interface acts as the basic interface between the user and the programs.
Autofocus program: the component that focuses the image so that the marker is easily recognized.
Information retrieval from database: information is retrieved from the database according to the recognized marker.
Converted frame: the input frame and the output frame are integrated into a single frame, so that the real world is augmented by the virtual world.
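To make the data flow concrete, here is a minimal sketch of this pipeline with the database mocked as an in-memory dictionary; all names and the overlay-blending choice are hypothetical, since the paper defines these modules only at block-diagram level:

```python
import cv2

# Hypothetical stand-in for "information retrieval from database":
# a recognized marker id maps to the overlay image to be rendered.
DATABASE = {7: cv2.imread("model_render.png")}

def convert_frame(frame, marker_id, x, y):
    """Blend the retrieved overlay into the input frame at (x, y)."""
    overlay = DATABASE.get(marker_id)
    if overlay is None:
        return frame                  # unknown marker: pass through
    h, w = overlay.shape[:2]
    roi = frame[y:y + h, x:x + w]     # no bounds checks in this sketch
    # "Converted frame": input and output merged into a single frame,
    # so the real view is augmented by the virtual content.
    frame[y:y + h, x:x + w] = cv2.addWeighted(roi, 0.3, overlay, 0.7, 0)
    return frame
```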
5. IMPLEMENTATION
This section describes the implementation of each component's role in the system. The implementation is done in Unity and Vuforia.
5.1. Setting up Vuforia
To get started, Vuforia’s Developer Portal is set up, and an
account is created. Once logged in, a developer page is
available. A license key works as an ID to create an application in Unity using Vuforia. This license key is created on the developer page with the help of the “License Manager”.
Next, the “Target Manager” is used to add Image Targets for Unity. This is done by adding an Image Target database and filling in the details. Images are added to the newly made database. Vuforia supports various kinds of targets, such as a single flat image, cylindrical, cuboidal, and 3D targets. Lastly, this Image Target database is downloaded for importing into Unity, with the help of the “Download Dataset” option.
Fig 5.1: Vuforia Developer Portal
Fig. 5.1 shows the Vuforia Developer Portal, where the license keys can be managed and the targets can be uploaded.
5.2. Integrating with Unity
Unity is a cross-platform application engine developed by Unity Technologies which provides a framework for designing game or app scenes in 2D and 3D. “ARCamera” is an augmented reality camera prefab from Vuforia. An Image Target is added to the scene; it can be found in the “Prefabs” folder. Fig. 5.2 shows the Image Target added into Unity, obtained by importing the dataset downloaded from Vuforia.
Fig 5.2: Image Target
5.3. Recognition
Text, marker, and markerless targets are all recognized in this phase. Text recognition recognizes the text from the words and augments content over it. Similarly, both marker and markerless targets are recognized, and content is augmented over the marker.
5.4. Model Augmentation
Fig 5.3: Augmented Model
After the system is ready with all the recognition processes, the models are augmented on top of the markers. A model is found under the “Model” folder in Assets. For the model to appear over the Image Target, it is made a child of the Image Target. This is done simply by dragging the model prefab onto the Image Target in the Hierarchy panel. Whenever the Image Target is detected by a mobile device’s camera, all children of the target appear together with it.
5.5. Deploying the System
After developing the system, the most important step is to deploy it on platforms that make it usable. Unity provides the advantage of deploying the system on multiple platforms such as Android and iOS. The company name and the bundle identifier are changed in the build settings for the system to be deployed. Fig. 5.4 shows these changes, where the bundle identifier and company name are added to the deployment build settings.
Fig 5.4: Changing Build Settings
5.6. Running the Application
After deploying the application, a package is made available for installation. Once the package is installed, the system starts and the camera module begins. The user then only has to hover the camera over the provided text or image, and the corresponding graphics will be rendered.
6. CONCLUSION
The system “Implementation of real-time image processing in augmented reality” is implemented such that a user can hover a camera over a page and obtain augmented information such as a 3D model, a video, or an explanation about that page. It is a system where no typing or searching is required to obtain information. The application provides a helping hand to children by enabling them to learn new concepts with graphical aids. Since the application can be deployed on any smartphone, a student can use it at his or her convenience. The application also needs no extra maintenance, making it an economical solution. Its interactive aspects, such as showing a 3D model, allow the user to understand a concept from every angle. The application can be expanded further and used for various age groups, not only for learning but also for helping users visualize and grasp things faster. It provides a unique and interesting way of learning and understanding unknown concepts.
REFERENCES
[1] R. T. Azuma et al., “A survey of augmented reality,” Presence, vol. 6, no. 4, pp. 355–385, 1997.
[2] R. Azuma, Y. Baillot, R. Behringer, S. Feiner, S. Julier, and
B. MacIntyre, “Recent advances in augmented reality,”
Computer Graphics and Applications, IEEE, vol. 21, no. 6, pp. 34–47, 2001.
[3] I. E. Sutherland and C. A. Mead, “Microelectronics and
computer science,” Scientific American, vol. 237, pp.
210–228, 1977.
[4] T. P. Caudell and D. W. Mizell, “Augmented reality: An
application of heads-up display technology to manual
manufacturing processes,” in System Sciences, 1992.
Proceedings of the Twenty-Fifth Hawaii International
Conference on, vol. 2. IEEE, 1992, pp. 659–669.
[5] A. L. Janin, D. W. Mizell, and T. P. Caudell, “Calibration of
head-mounted displays for augmented reality
applications,” in Virtual Reality Annual International
Symposium, 1993 IEEE. IEEE, 1993, pp. 246–255.
[6] F. P. Brooks Jr., “The computer scientist as toolsmith II,”
Communications of the ACM, vol. 39, no. 3, pp. 61–68,
1996.
[7] M. de Sá and E. Churchill, “Mobile augmented reality:
exploring design and prototyping techniques,” in
Proceedings of the 14th international conference on
Human-computer interaction with mobile devices and
services. ACM, 2012, pp. 221–230.
[8] O. Bimber, R. Raskar, and M. Inami, Spatial augmented
reality. AK Peters Wellesley, 2005.
[9] F. Zhou, H. B.-L. Duh, and M. Billinghurst, “Trends in
augmented reality tracking, interaction and display: A
review of ten years of ISMAR,” in Proceedings of the 7th
IEEE/ACM International Symposium on Mixed and
Augmented Reality. IEEE Computer Society, 2008, pp.
193–202.
[10] A. Shatte, J. Holdsworth, and I. Lee, “Mobile augmented
reality based context-aware library management
system,” Expert Systems with Applications, vol. 41, no. 5,
pp. 2174–2185, 2014.
[11] W. Piekarski, B. Gunther, and B. Thomas, “Integrating
virtual and augmented realities in an outdoor
application,” in Augmented Reality (IWAR’99), Proceedings of the 2nd IEEE and ACM International Workshop on. IEEE, 1999, pp. 45–54.
[12] S. K. Ong, A. Y. Nee, and S. K. Ong, Virtual Reality and
Augmented Reality Applications in Manufacturing.
Springer Verlag, 2004.
[13] D. Van Krevelen and R. Poelman, “A survey of
augmented reality technologies, applications and
limitations,” International Journal of Virtual Reality, vol.
9, no. 2, p. 1, 2010.
[14] P. Milgram and F. Kishino, “A taxonomy of mixed reality
visual displays,” IEICE TRANSACTIONS on Information
and Systems, vol. 77, no. 12, pp. 1321–1329, 1994.
[15] S. M. Land and H. T. Zimmerman, “Synthesizing
perspectives on augmented reality and mobile learning,”
TechTrends, vol. 58, no. 1, p. 3, 2014.
[16] H.-Y. Chang, H.-K. Wu, and Y.-S. Hsu, “Integrating a
mobile augmented reality activity to contextualize
student learning of a socioscientific issue,” British
Journal of Educational Technology, vol. 44, no. 3, pp.
E95–E99, 2013.
[17] L. Alem and W. T. Huang, Recent trends of mobile
collaborative augmented reality systems. Springer,
2011.