International Research Journal of Engineering and Technology (IRJET) e-ISSN: 2395-0056
Volume: 04 Issue: 03 | Mar-2017 www.irjet.net p-ISSN: 2395-0072
© 2017, IRJET | Impact Factor value: 5.181 | ISO 9001:2008 Certified Journal | Page 1439
Raspberry Pi Augmentation: A cost effective solution to Google Glass
Nikhil Paonikar1, Samuel Kumar2, Manoj Bramhe3
1,2UG student, Department of Information Technology,
St. Vincent Pallotti College of Engineering and Technology, Maharashtra, India
3Associate Professor and H.O.D., Department of Information Technology,
St. Vincent Pallotti College of Engineering and Technology, Maharashtra, India
---------------------------------------------------------------------***---------------------------------------------------------------------
Abstract - In this paper, we present the concept of a head-mounted human enhancement that uses a Raspberry Pi as its main computer. The augmentation is voice controlled and uses a camera module to interact with the outside world, processing the retrieved information for real-time use by the wearer. The wearer uses the augmentation as a brain-computer interface to connect to the web and to extract and process information from the physical world. As a forward stride in wearable technology, the device employs the disciplines of speech synthesis, cloud computing and image processing in order to serve the user as a primary portable computer that is voice-enabled and always ready for interaction.
Key Words: Wearable Technology, Augmentation, Wearable Device, Human Enhancement, Brain-Computer Interface, Human-Computer Interaction.
1. INTRODUCTION
The term “wearable technology” refers to electronic devices, particularly computers, that are incorporated into items of clothing or, more prominently, into accessories that can comfortably be worn on the body. Wearable devices are capable of performing many of the same computing tasks as smartphones, tablets and laptops; in some cases, wearable technology is proficient enough to outperform these portable devices entirely. Wearable tech tends to be better engineered and often employs cutting-edge technologies that are rarely found in the hand-held devices on the market today. These relatively new wearable devices can provide sensory and scanning features not usually seen in mobile and laptop devices, such as biofeedback and tracking of physiological functions.
The wearable presented here is being developed to function as a human enhancement. It is engineered to acquire data from the user’s surrounding environment through a camera integrated within the head-mounted augmentation and to process that data in real time on the Raspberry Pi. The information obtained by processing this data is then projected onto a semi-transparent glass suspended at the front (anterior) of the augmentation. The augmentation also includes an artificial intelligence that responds to the user’s voice and performs the instructed tasks. The system is capable of detecting faces and motion to some degree.
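To make the detection capability concrete, the following is a minimal sketch, not the authors’ implementation, of how face detection is commonly done on a Raspberry Pi with OpenCV; it assumes OpenCV with its bundled Haar cascades is installed and that the camera is exposed as a standard video device (an assumption):

    import cv2

    # Frontal-face Haar cascade shipped with OpenCV (path may differ on older builds).
    cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    face_cascade = cv2.CascadeClassifier(cascade_path)

    cap = cv2.VideoCapture(0)          # camera assumed to appear as /dev/video0
    ret, frame = cap.read()
    if ret:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        print("Faces detected:", len(faces))
    cap.release()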
2. LITERATURE SURVEY
According to Kevin Warwick in [1], what constitutes a Brain-Computer Interface (BCI) can be extremely broad; a standard keyboard could be so regarded. It is clear, however, that virtual reality systems, e.g. glasses containing a miniature computer screen for a remote visual experience (Mann 1997), are felt by some researchers to fit this category. Certain body conditions, such as stress or alertness, can indeed be monitored in this way.
In their paper on a wearable personal cloud [2], Hasan and Khan describe personal terminal devices as interactive devices capable of wireless communication such as Wi-Fi, Bluetooth and ZigBee. Such devices include, but are not limited to, smartphones, smartwatches, tablet computers, smart glasses, health monitors and other IoT devices. They further posit that this kind of wearable tech runs on lower hardware specifications and therefore functions as a resource-constrained device in terms of computational power, battery, memory and storage.
Computer vision [3] is the technology by which machines are able to interpret and extract the necessary information from an image. Computer vision spans various fields such as image processing, image analysis and machine vision, and it includes certain aspects of artificial intelligence, such as pattern recognition. Machines that implement computer vision techniques require image sensors that detect electromagnetic radiation, usually in the form of visible or ultraviolet light.
In his paper [4], Ronald T. Azuma describes a unique characteristic of augmented reality: besides adding objects to a real environment, AR also has the potential to remove them. Current work has focused on adding virtual objects to a real environment. However, graphic overlays might also be used to remove or hide parts of the real environment from a user. For example, to remove a desk in
the real environment, draw a representation of the real
walls and floors behind the desk and "paint" that over the
real desk, effectively removing it from the user's sight. This
has been done in feature films. Doing this interactively in
an AR system will be much harder, but this removal may
not need to be photorealistic to be effective.
Furthermore, in [5], Durlach tells us that augmented
reality might apply to all senses, not just sight. So far,
researchers have focused on blending real and virtual
images and graphics. However, AR could be extended to
include sound. The user would wear headphones equipped
with microphones on the outside. The headphones would
add synthetic, directional 3-D sound, while the external
microphones would detect incoming sounds from the
environment. This would give the system a chance to mask
or cover up selected real sounds from the environment by
generating a masking signal that exactly cancelled the
incoming real sound. While this would not be easy to do, it
might be possible, as discerned in [6]. Another example is haptics: the science of applying touch (tactile) sensation and control to interaction with computer applications. Gloves with devices that provide tactile feedback might augment real forces in the environment. For example, a user might run a hand over the surface of a real desk. Simulating
such a hard surface virtually is fairly difficult, but it is easy
to do in reality. Then the tactile effectors in the glove can
augment the feel of the desk, perhaps making it feel rough
in certain spots. This capability might be useful in some
applications, such as providing an additional cue that a
virtual object is at a particular location on a real desk.
In [7], Sin and Zaman used the glasses metaphor to present virtual heavenly bodies on AR markers that students could manipulate in front of them. Students wore a head-mounted display so that both of their hands would be free to handle the markers containing the virtual solar system.
A similar study in [8], by Shelton and Hedley, argued that this kind of visualization is advantageous because students can more easily understand concepts such as day and night when they can test for themselves what happens when one side of the Earth is occluded.
The Raspberry Pi is a credit card-sized computer that plugs into a TV and a keyboard. It is a capable little computer that can be used in electronics projects and for many of the things a desktop PC does, such as spreadsheets, word processing, browsing the internet and playing games. It also plays high-definition video. It is designed to cost around Rs. 2,600 for the latest model [9].
According to Reaz et al. in [10], artificial intelligence has played a crucial part in the design and implementation of future houses. Early research focused on the control of home appliances, but current trends are moving towards the creation of a self-thinking home. In recent years, many research projects have utilized artificial intelligence tools and techniques, including multi-agent systems (MAS), i.e. systems composed of several agents capable of speedy mutual interaction and action prediction, together with artificial neural networks, fuzzy logic and reinforcement learning. The authors find that a combination of tools and techniques is crucial for successful implementation, and they theorize a platform for future comparative studies between different algorithms and architectures that can serve as a reference point for developing more cutting-edge smart home technologies.
According to Sherri Stevens in her research on the leading voice-controlled UIs in [11], a major issue in the marketing research industry is that the majority of surveys are not well suited to a mobile device. In many cases, it is difficult to see all of the answer choices on a small-screen device, and answer choices on a second page are less likely to be selected. Using the text-to-voice feature on mobile phones may be a solution to this problem. Stevens tested several long answer lists, which resulted in sensible data, and argues that this area of research should be explored further as a way to obtain reliable research data from respondents using a small-screen device when long answer lists are unavoidable.
3. SYSTEM DESCRIPTION
3.1 Raspberry Pi
A Raspberry Pi 3 (quad-core ARM Cortex-A53 at 1.2 GHz) is used as the primary computer. The Raspberry Pi is a credit card-sized computer that runs a Linux distribution. It connects to the camera module and performs the processing for object recognition. It is also used to set up the hands-free prototype of the Alexa Voice Service by registering as a developer and creating a client token.
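For illustration, the developer credentials created during that registration are typically kept in a small configuration file that the AVS client reads at start-up. The field names and file name below are placeholders, not the exact names used by any particular AVS sample application:

    import json

    # Placeholder values: the real Product ID, Client ID and Client Secret come
    # from the device registration in the Amazon developer console.
    avs_config = {
        "productId": "raspberry_pi_augmentation",   # hypothetical identifier
        "clientId": "<amazon-client-id>",
        "clientSecret": "<amazon-client-secret>",
    }

    with open("avs_config.json", "w") as f:
        json.dump(avs_config, f, indent=2)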
3.2 Raspberry Pi Camera NoIR v2
The infrared Camera Module v2 (Pi NoIR) has a Sony IMX219 8-megapixel sensor (compared with the 5-megapixel OmniVision OV5647 sensor of the original camera). The Pi NoIR offers everything the regular Camera Module offers, with one difference: it does not employ an infrared filter (NoIR = No Infrared). This means that pictures taken in daylight look decidedly curious, but it gives the wearer the ability to see in the dark with infrared lighting.
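A minimal capture sketch with the official picamera Python library, assuming the camera interface has been enabled in raspi-config, looks roughly like this:

    from time import sleep
    from picamera import PiCamera

    camera = PiCamera()
    camera.resolution = (1640, 1232)   # a native binned mode of the IMX219 sensor
    camera.start_preview()
    sleep(2)                           # let the sensor settle exposure and gain
    camera.capture('/home/pi/capture.jpg')
    camera.stop_preview()
    camera.close()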
3.3 Augmented Reality Display
The augmented reality display is composed of an LCD screen whose image is projected onto a semi-transparent mirror through a Fresnel lens. The LCD is connected to the Raspberry Pi via the GPIO pins. The /boot/config.txt file is modified to display the contents
of the screen as a mirror image, so that the original content is projected onto the semi-transparent mirror the right way round.
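As a sketch only: on the legacy Raspberry Pi firmware the horizontal flip can be requested with the display_rotate option (0x10000 is the horizontal-flip flag; newer firmware and display drivers may use a different option), for example by appending one line to /boot/config.txt:

    # Append a horizontal-flip directive to the boot configuration (run as root).
    # display_rotate=0x10000 mirrors the output horizontally on legacy firmware.
    with open("/boot/config.txt", "a") as cfg:
        cfg.write("\ndisplay_rotate=0x10000\n")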
3.4 Intelligent User Interface
Verbal commands are interpreted by a voice processing module. Amazon’s Alexa Voice Service (AVS) is used to take user input and produce results in the form of auditory or visual information. AVS is Amazon’s facility for adding voice control to any device or electronic product.
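Purely as an illustration of how an intelligent user interface can route recognized phrases to actions on the device (the phrases and handler names below are hypothetical and are not part of AVS itself):

    def take_picture():
        print("Capturing an image from the head-mounted camera...")

    def identify_object():
        print("Reverse-searching the last captured image...")

    # Map recognized command phrases to local handlers on the Raspberry Pi.
    COMMANDS = {
        "take a picture": take_picture,
        "what am i looking at": identify_object,
    }

    def dispatch(transcript):
        handler = COMMANDS.get(transcript.lower().strip())
        if handler:
            handler()
        else:
            print("Unrecognized command:", transcript)

    dispatch("Take a picture")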
3.5 Head-mount
The head-mount is a mechanical framework on which all the components are physically installed. It functions as a helmet of sorts that can be strapped onto the user’s head. All hardware components are fitted over and around the head-mount to maintain the coherence of the augmentation.
4. SYSTEM ARCHITECTURE
The primary processing is done on the Raspberry Pi, which also serves as the core module for the rest of the system. A screen is connected to the Pi via the GPIO pins or, alternatively, over a VNC server connection.
The contents of the screen are displayed onto the semi-
transparent mirror which serves as a real-time heads-up
display for the user.
During boot-up, the camera module is initialized, and some scripts have to be run manually in the Linux terminal after boot-up. These scripts initialize the AVS client and connect it to the camera module.
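A minimal sketch of such a start-up step, assuming hypothetical script paths on the Pi (the actual AVS sample-app entry points vary by version), might be:

    import subprocess

    # Hypothetical paths; substitute the real AVS client and camera-service scripts.
    AVS_CLIENT = "/home/pi/avs/start_avs_client.sh"
    CAMERA_SERVICE = "/home/pi/augmentation/camera_service.py"

    # Launch the AVS client and the camera service as background processes
    # once a terminal session is available after boot.
    subprocess.Popen(["bash", AVS_CLIENT])
    subprocess.Popen(["python3", CAMERA_SERVICE])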
Upon successful execution, the camera can be controlled vocally. A captured image can then be reverse-searched to identify the objects or scene it contains from similar images on the Internet.
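As a sketch only (the endpoint below is a placeholder, since the paper does not specify which reverse-image-search service is used), uploading a captured frame to an HTTP image-recognition API could look like:

    import requests

    # Placeholder endpoint: substitute the URL of the actual reverse-image-search
    # or image-recognition service in use.
    API_URL = "https://example.com/reverse-image-search"

    with open("/home/pi/capture.jpg", "rb") as img:
        response = requests.post(API_URL, files={"image": img}, timeout=30)

    print(response.status_code)
    print(response.text[:200])   # first part of the returned match/label data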
The entire hardware setup is powered by a 10,000 mAh power bank, which can supply the augmentation consistently for almost 14 hours. All constituent modules, including the Raspberry Pi, the headset, the camera module and any peripherals, are supplied power from the same source.
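As a rough sanity check of this figure (treating the quoted capacity as deliverable at the 5 V output and ignoring converter losses), 10,000 mAh spread over 14 hours corresponds to an average draw of about 10,000 mAh / 14 h ≈ 714 mA, i.e. roughly 3.6 W at 5 V, which is plausible for a Raspberry Pi 3 with a camera module and a small LCD under light load.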
Fig -1: Architecture of Raspberry Pi Augmentation
5. CONCLUSION
The augmentation is a cost-effective alternative to currently available wearable technology. Its battery backup of more than 10 hours is significantly ahead of current wearable devices.
Besides low cost and high energy efficiency, the augmentation is far less intrusive than its competition: it does not bother the user with unnecessary information, and its updates and backups are performed while the device is being charged.
The overall design of this wearable device has six modules. The first is the Raspberry Pi 3, the main processing unit of the augmentation. The second module consists of the display screen and the augmented reality display. The third module is the intelligent user interface, which responds to the user’s requests via the fourth module, i.e. voice automation, effectuated using the Alexa Voice Service. Visual data acquisition is done by the fifth module, a camera without an infrared filter. The sixth, minor module, which facilitates voice input, is a headset with a microphone; it is used to receive vocal feedback from the IUI as well as to control the wearable via vocal commands.
The wearable acquires data from the user’s surroundings and processes it to provide information that helps the wearer make better decisions. It works as an assistive human enhancement by providing real-time information generated by processing the visual data from the user’s surroundings and the vocal commands given by the user through the augmentation’s intelligent user interface.
REFERENCES
[1] L. Zonneveld, H. Dijstelbloem, D. Ringoir, “Reshaping
the human condition: exploring human enhancement,”
Amsterdam School for Cultural Analysis (ASCA), 2008.
[2] Ragib Hasan and Rasib Khan, “A Cloud You can Wear:
Towards a Mobile and Wearable Personal Cloud,”
University of Alabama at Birmingham, 2016.
[3] Pranav Mistry, Pattie Maes, “SixthSense: a wearable
gestural interface,” 2008, MIT Media Laboratory, 2008.
[4] R.T. Azuma, “A Survey of Augmented Reality,” Presence: Teleoperators and Virtual Environments, vol. 6, no. 4, pp. 355-385, 1997.
[5] Durlach, Nathaniel I. and Anne S. Mavor, “Virtual
Reality: Scientific and Technological Challenges,” (Report
of the Committee on Virtual Reality Research and
Development to the National Research Council) National
Academy Press. ISBN 0-309-05135-5, 1995.
[6] Wellner, Pierre, “Interacting with Paper on the
DigitalDesk,” CACM 36, 7, 86-96, July 1993.
[7] A.K. Sin and H.B. Zaman, “Live Solar System (LSS):
Evaluation of an Augmented Reality Book-Based
Educational Tool,” Proc. Int’l Symp. Information
Technology (ITSim), vol. 1, pp. 1-6, June 2010.
[8] B.E. Shelton and N.R. Hedley, “Exploring a Cognitive
Basis for Learning Spatial Relationships with Augmented
Reality,” Technology, Instruction, Cognition and Learning,
vol. 1, no. 4, pp. 323- 357, 2004.
[9] Raspberry Pi - Teach, Learn, and Make with Raspberry Pi, [Online]: http://www.raspberrypi.org/help/faqs
[10] M.B.I Reaz, “Artificial Intelligence Techniques for
Advanced Smart Home Implementation,” ACTA TECHNICA
CORVINIENSIS- Bulletin of Engineering, ISSN 2067-3809,
2013.
[11] Sherri Stevens, “Giving Voice to Research,” CASRO Digital Research Conference, Nashville, 2015.