OBSTACLE DETECTION USING LASER
ABSTRACT
Cameras are the eyes of a robot: its movements are controlled by analyzing the images these cameras capture. The input to the robot should therefore be capable of producing a 3D image, but an ordinary camera cannot capture a 3D image directly. Several methods exist to convert a 2D image into a 3D image, but most use more than one camera, and some cannot work in dark conditions. Here we introduce a new method to obtain a 3D image using a single camera, which can be implemented even in the absence of light. The BeagleBoard-xM is used as the image processing platform.
PROBLEM DEFINITION
In this project we identify an object in its environment using a line laser and obtain its 3D perspective view, by continuously capturing images of the object with a high-definition webcam and processing them with our algorithm.
INTRODUCTION
Computer vision is the field concerned with the automated
processing of images from the real world to extract and interpret
information on a real-time basis. It is the science and technology of
machines that see, which includes methods for acquiring,
processing, analyzing, and understanding images.
High-dimensional data from the real world is needed in order to
produce numerical or symbolic information in the form of decisions.
The trend in this field is to duplicate the abilities of human vision
electronically.
The applications of computer vision extend from machine
vision used in industry for assembly-line manufacturing, to the
biomedical field for analyzing X-ray and microscopy images
used in diagnosing patients, and to the military field for
detecting enemy movement, missile guidance, and so on.
The latest application areas of computer vision are autonomous
vehicles, which include submersibles, land-based vehicles (small
wheeled robots, cars, or trucks), aerial vehicles, and
Unmanned Aerial Vehicles (UAVs). The level of autonomy ranges
from fully autonomous (unmanned) vehicles to vehicles where
computer-vision-based systems support a driver or a pilot in
various situations. A fully autonomous vehicle typically uses
computer vision for navigation, i.e. for knowing where it is, for
producing a map of its environment, and for detecting obstacles.
Computer vision can also be used for detecting certain task-specific
events, e.g. a UAV looking for forest fires. Examples of supporting
systems are obstacle-warning systems in cars and systems for
autonomous landing of aircraft. Several car manufacturers have
demonstrated systems for autonomous driving of cars, but this
technology has still not reached a level where it can be put on the
market. There are ample examples of military autonomous vehicles,
ranging from advanced missiles to UAVs for reconnaissance
missions or missile guidance. Space exploration is already being
carried out by autonomous vehicles using computer vision, e.g.
NASA's Mars Exploration Rover and ESA's ExoMars rover.
IV. PROJECT ANALYSIS AND DESIGN
A computer-vision-based robot can solve many problems faced
by humans in day-to-day life. It can help in a fully automated
factory such as a bottling plant by placing caps on every filled
bottle, using its ability to see and detect the bottles. Such robots
find extensive application at NASA, ESA, and other space agencies
in their unmanned missions to Mars, the Moon, and other planetary
bodies. They are also used as ATVs (All-Terrain Vehicles) that
reach places humans cannot, such as the deep ocean. They are
used at times of disaster to rescue people trapped under debris,
using their infrared cameras.
Unmanned Aerial Vehicles (UAVs) are the pride of defense
forces across the world. These are multipurpose aircraft that
can be used for aerial inspection of a place, as stealth aircraft
for spying, and for attack. Driverless cars are said to be the
future of the automobile industry; a driverless car needs a very
powerful camera, a processing unit, and a very good algorithm.
Our project can be seen as a miniature form of all the above
situations. Here we consider only a small case: the robot
detects an object using a laser and creates its 3D view.
Real-time image processing helps the robot accurately
detect an object by following the track of the laser.
Figure 1. Block Diagram
The image is acquired using a webcam (iBall HD) and transferred
to the BeagleBoard over USB. A suitable webcam was selected after
familiarization with the BeagleBoard. The BeagleBoard-xM is a
low-power, low-cost single-board computer produced by Texas
Instruments in association with Digi-Key. Processing is done on the
BeagleBoard; the software used for processing is OpenCV in C.
The C code using OpenCV is executed from a terminal program in Linux.
A servo motor controlled by an Arduino UNO is used to control the
movement of the laser for scanning: the laser is mounted on the
motor and swept back and forth by instructions from the Arduino
program. The webcam captures frames of the object being scanned,
which are then manipulated by the OpenCV source code to obtain
our objective. The resulting image is displayed in real time. There is
no physical connection between the BeagleBoard and the Arduino
board.
Object detection and 3D generation consist of three main
phases: familiarizing ourselves with the BeagleBoard, designing
the physical part of the scanning section, and image processing.
FAMILIARISING BEAGLEBOARD
With the algorithm in hand, we needed a stand-alone system
for image acquisition and processing. We chose the BeagleBoard,
a single-board computer with an ARM Cortex-A8 core in the
OMAP3530 (a SoC that also includes a DSP), as our computer. We
studied the Linux OS to familiarize ourselves with the operation of
the BeagleBoard and ported Angstrom Linux (a Linux distribution
for embedded-system developers) to the SBC. Porting Angstrom
Linux to the BeagleBoard is done by booting Linux from the SD
card, which acts as secondary memory for the board. The following
instructions boot Angstrom from the SD card; they are performed
in a terminal on a Lubuntu platform.
Installing Angstrom on the BeagleBoard-xM (replace /dev/sdX with the device of your SD card):
mkdir ~/angstrom-wrk
cd ~/angstrom-wrk
wget http://cgit.openembedded.org/cgit.cgi/openembedded/plain/contrib/angstrom/omap3-mkcard.sh
chmod +x omap3-mkcard.sh
sudo ./omap3-mkcard.sh /dev/sdX
sync
df -h
# extract the boot files to the ./boot directory
tar --wildcards -xjvf [YOUR-DOWNLOAD-FILE].tar.bz2 ./boot/*
# copy the files to the SD-card boot partition
cp boot/MLO-* /media/boot/MLO
cp boot/uImage-* /media/boot/uImage
cp boot/u-boot-*.bin /media/boot/u-boot.bin
sync
# extract the root filesystem to the Angstrom partition
sudo tar -xvj -C /media/Angstrom -f [YOUR-DOWNLOAD-FILE].tar.bz2
sync
umount /media/boot
umount /media/Angstrom
V. DESIGN OF PHYSICAL PART OF SCANNING
SECTION
Servos have integrated gears and a shaft that can be precisely
controlled. Standard servos allow the shaft to be positioned at
various angles, usually between 0 and 180 degrees. Continuous
rotation servos allow the rotation of the shaft to be set to various
speeds. Here we employ an Arduino UNO board to control
the movement of the laser. The required hardware and circuitry
are explained below.
Hardware Required
1) Arduino Board
2) Servo Motor
3) hook-up wire
Circuit
Servo motors have three wires: power, ground, and signal. The
power wire is typically red, and should be connected to the 5V
pin on the Arduino board. The ground wire is typically black or
brown and should be connected to a ground pin on the Arduino
board. The signal pin is typically yellow, orange or white and
should be connected to a digital pin on the Arduino board. Note
that servos draw considerable power, so if we need to drive more
than one or two, we will probably need to power them from a
separate supply (i.e. not the +5V pin on the Arduino). Be sure
to connect the grounds of the Arduino and the external power
supply together.
Figure 2. Circuit setup for the Arduino board
VI. IMAGE PROCESSING
Image processing is performed on the OpenCV 2.4.1
platform. So before proceeding with the processing of images
captured by the webcam, OpenCV must first be installed on the
Ubuntu OS; the instructions are executed in the terminal program
provided by Ubuntu.
VII. PROGRAM EXPLANATION
Case 1 - reference plane without object
Before we begin scanning the object, we place a reference line
on either side of the plane so that the object we wish to scan lies
within these reference lines. This ensures that the laser line is
longer than the object, so that a change or shift in the path of the
laser line can be detected once it passes over the object. The first
step is obtaining a continuous video sequence from the camera,
from which a frame is captured using the OpenCV frame-capture
instruction. The captured image is stored as a matrix of pixels,
where each pixel occupies an address location indicated by its row
and column. Image processing starts by taking the first pixel from
the image matrix, i.e. the pixel at row 1, column 1, and retrieving
the RGB color corresponding to it. We then check whether the red
intensity of this pixel is greater than both its green and blue
intensities. If so, the pixel's RGB channels are assigned the value
255, i.e. white; if the red intensity is lower, the channels are
assigned the value 0, i.e. black.
Case 2 - reference plane with object
The above steps are repeated for each pixel in the row by
incrementing the column, and then for every row until the entire
image has been processed. This yields a grey-level image
corresponding to the RGB image: the object appears in black and
the background in white. The next step is obtaining the 3D
perspective view of the object. The main idea is that when the
laser line reaches an object edge, it is displaced by an amount
determined by the object's height. So there is a displacement of
the laser line from its reference line as it nears an object edge,
and after the RGB-to-grey conversion we get a white region for
the red line up to the point where it is displaced. We note the row
and column numbers of the line where it is displaced. Keeping the
column number constant, we move to the row where the displaced
line appears and check whether the RGB color at that pixel
location is red. If so, we note the row number of this pixel in a
separate variable and subtract the previous row number from it to
obtain a non-zero value (say x). This value x is then multiplied by
an arbitrarily defined RGB value, which is applied at the edge of
the object to obtain its perspective view.
VIII. FLOW CHART
IX. CONCLUSION
Using a single camera and a line laser, we are able to
produce a 3D perspective view of an object even in the
absence of light.
In our project, object detection is the only concern.
Depending on the type of object, the project can be
extended to avoid the object or take other necessary
actions against it. It can also be used for terrain mapping
in robotics. There is a small delay in the image-processing
operation, and since there is no perfect synchronization
between the camera data rate and the BeagleBoard, there
is a chance of noise forming in the image. DSP optimization
could be used to overcome this problem.
X. RESULT
Fig. Obstacle detection output result
Fig. 3D perspective view output
