Replacing the Office Intern:
An Autonomous Coffee Run with a Mobile Robot
Tony Pratkanis
Outline of the Talk
● General Background
● Coffee Grasping Steps
– Navigation
– Doors
– Elevators
– Object Passing
● Lessons Learned
About the Salisbury Robotics Lab
● sr.stanford.edu
Personal Robotics: The PR2
● Based on the PR1 at Salisbury Lab
● Spun out to Willow Garage to become PR2
Personal Robotics: The PR2
● Costs $400,000+, weighs 400 pounds
– More battery capacity than a Prius
– Two laser scanners, many color cameras, Kinect-like depth cameras, two
arms, etc.
– Despite this, it is still human-safe
● The PR2 is a “Kitchen Sink” robot
– Designed exclusively for research purposes
– It has a vast (likely excessive) number of sensors and features
● Ships integrated with ROS
– An open-source robotics middleware developed by Willow Garage
– Vast amounts of useful software including motion planning, navigation,
SLAM, computer vision, 3D object recognition, linear algebra, etc.
The Task
Personal Robotics
● Personal robotics is the creation of robots that live and play safely
and effectively in human-centric environments
– The ideal is “Rosie” from The Jetsons
● Faces many challenges not present in other forms of robotics
– Extremely diverse obstacles and objectives
– Highly unpredictable and unstructured environment
– Safety issues
● My solution to these challenges:
– Analyze the nature of the task and the available information
– Develop simple procedures that exploit that information
● The “coffee bot” allows us to demonstrate this approach
Important Qualifications
● The robot must be fully autonomous
– No human intervention except for interaction with
coffee shop employees
● The environment must be unmodified
– Modification of human environments is often
socially and politically intractable
– Defeats the purpose of building such a robot
Navigation Video
● Navigation basic demo
● Note how the robot intelligently avoids both
static obstacles and people
How Mapping and Navigation Work
● Two sensors are used
– Wheel odometry
● Very accurate over short distances
● Error builds up
– Laser scanners
● Measure distance to objects accurately (to approximately 1 cm)
● Integrated by software to create a detailed map of the environment
– “SLAM”: simultaneous localization and mapping
● Then the map is used for navigation
Navigation and Obstacle Avoidance
● Laser data, the map, and odometry are fused for localization
– Particle-filter based approach (a minimal sketch follows this slide)
– Obtains the position of the robot
● A cost-map grid is built of all obstacles
– Real-time updates of the obstacle grid
– Fed to path-finding algorithms
● The navigation software was modified to handle multiple floors
– Leading to “multi_map_navigation”
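The slides don't include code, but the localization step above (fusing odometry and laser data with a particle filter) can be illustrated with a small sketch. Everything below is a hypothetical stand-in rather than the actual ROS implementation: the motion-noise values, the toy rectangular "map", and the expected_range() ray cast are all invented for illustration.

```python
# Minimal 2D particle filter sketch (illustrative only, not the actual ROS code).
# Assumptions: a rectangular 10 m x 10 m room stands in for the real map, and
# expected_range() is a toy ray cast against its walls.
import numpy as np

N_PARTICLES = 500
ROOM = 10.0  # the room is the square [0, ROOM] x [0, ROOM]

def expected_range(pose, beam_angle, max_range=5.0):
    """Distance from pose to the nearest wall along beam_angle (toy ray cast)."""
    x, y, theta = pose
    a = theta + beam_angle
    dx, dy = np.cos(a), np.sin(a)
    ts = []
    if dx > 1e-6:  ts.append((ROOM - x) / dx)
    if dx < -1e-6: ts.append((0.0 - x) / dx)
    if dy > 1e-6:  ts.append((ROOM - y) / dy)
    if dy < -1e-6: ts.append((0.0 - y) / dy)
    t = min([t for t in ts if t > 0.0], default=max_range)
    return min(t, max_range)

def predict(particles, d_forward, d_theta):
    """Motion update: apply the odometry delta plus Gaussian noise to every particle."""
    noise_f = np.random.normal(0.0, 0.02, len(particles))   # 2 cm translational noise
    noise_t = np.random.normal(0.0, 0.01, len(particles))   # small rotational noise
    particles[:, 2] += d_theta + noise_t
    particles[:, 0] += (d_forward + noise_f) * np.cos(particles[:, 2])
    particles[:, 1] += (d_forward + noise_f) * np.sin(particles[:, 2])
    return particles

def update(particles, beam_angles, measured_ranges, sigma=0.05):
    """Measurement update: weight particles by how well the scan matches, then resample."""
    weights = np.ones(len(particles))
    for angle, z in zip(beam_angles, measured_ranges):
        expected = np.array([expected_range(p, angle) for p in particles])
        weights *= np.exp(-0.5 * ((z - expected) / sigma) ** 2)
    weights += 1e-300                      # avoid an all-zero weight vector
    weights /= weights.sum()
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]                  # resampled particle set

# Usage: start with uniformly scattered particles, then alternate predict/update.
particles = np.column_stack([np.random.uniform(0, ROOM, N_PARTICLES),
                             np.random.uniform(0, ROOM, N_PARTICLES),
                             np.random.uniform(-np.pi, np.pi, N_PARTICLES)])
particles = predict(particles, d_forward=0.10, d_theta=0.0)   # robot drove 10 cm
particles = update(particles, beam_angles=[-0.5, 0.0, 0.5],
                   measured_ranges=[4.2, 3.9, 4.4])
print("pose estimate:", particles.mean(axis=0))
```

Each cycle moves every particle by the odometry delta plus noise, re-weights the particles by how well a few laser beams match the map, and resamples; the cost-map and path planner described on the slide then work from the resulting pose estimate.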
Door Pushing Video
● Open the door by pushing
● Note how the robot lines up in all three axes
then rotates to open the door
Pushing Doors
● These specific doors are challenging from
two perspectives
– Transparent and thus hard to detect
– Heavy and thus physically hard to open
● The PR2 uses mechanical approaches to
detection instead of vision
● Uses the entire body and strength of the
robot to overcome the doors
Pushing Doors
● The robot uses the mapping and navigation software to locate the door approximately (to within 30 cm)
● Next, it uses the tilting laser scanner to line up with the door:
– Travels to the correct distance from the door
– Aims the laser at the base of the door
● Lines up rotationally using the base of the door
– Aims the laser at the middle of the door
● The central window of the door leaves a gap in the laser data compared to the metal sides; the robot centers this gap to align horizontally with the door (see the sketch after this slide)
● Then, it spins around and backs through the door
– Backing through the door is important so the robot hits the door's metal bar
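As a rough illustration of the horizontal-alignment step, the sketch below centers the robot on the gap that the glass window leaves in the laser scan. The scan layout, the thresholds, and the strafe command are assumptions for illustration, not the PR2's actual door code.

```python
# Sketch of centering on the laser "gap" left by a transparent door panel.
# Inputs are placeholders: `ranges` is a scan aimed at the middle of the door,
# `angles` the corresponding beam angles.
import numpy as np

def door_gap_offset(ranges, angles, gap_range=3.0):
    """Return the lateral offset (m) from the robot's centerline to the gap's center.

    Beams that pass through the glass return ranges much longer than the metal
    door frame (or no return at all); treat those beams as the 'gap'.
    """
    ranges = np.asarray(ranges, dtype=float)
    angles = np.asarray(angles, dtype=float)
    gap = ~np.isfinite(ranges) | (ranges > gap_range)   # beams that saw no nearby surface
    if not gap.any():
        return None                                      # no window found; realign first
    gap_angles = angles[gap]
    center_angle = 0.5 * (gap_angles.min() + gap_angles.max())
    # Convert the gap's center angle into a lateral offset at the door plane.
    door_distance = np.nanmin(ranges[~gap]) if (~gap).any() else 1.0
    return door_distance * np.tan(center_angle)

# Usage: the gap is shifted to one side, so the sketch reports a sideways correction.
offset = door_gap_offset(
    ranges=[0.9, 0.9, 0.9, np.inf, np.inf, np.inf, 0.9],
    angles=np.linspace(-0.3, 0.3, 7))
if offset is not None and abs(offset) > 0.02:           # 2 cm tolerance
    print(f"strafe {offset:+.3f} m to center on the door window")
```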
Driving and Door Pulling Video
● Drive to the next door
● The lasers were negatively impacted by the
sun, requiring adjustments to software filters
● Note how the door is pulled open by the robot
Pulling Doors
● Detecting the handles of transparent doors is
difficult
– The background is unpredictable because of the
window
– The window reflects the handle, leading to multiple
images
– The handle itself is shiny, leading to unpredictable
coloration and edge structure
Pulling Doors
● Solution: Purely mechanical approach to handle
detection and door opening
– Once again, the robot uses the map to know the approximate
location of the door
– Drives up to the door and does a “waggle dance” to align with
it mechanically
– Backs up and slides the hand across the door to find the
handle
– Grabs the handle and moves so that the handle is at a fixed
position with respect to the robot
– Dances the door opening dance to open the door
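The handle search in this sequence is purely by feel: slide the hand along the door and stop when it catches on something. A toy version of that loop is sketched below; the force threshold, step size, and the FakeArm interface are invented for illustration and are not a PR2 or ROS API.

```python
# Toy version of finding a door handle by feel: sweep the gripper sideways along
# the door surface and stop when lateral resistance jumps.
FORCE_THRESHOLD = 8.0   # resistance (N) that we take to mean "hit the handle"
STEP = 0.01             # sweep in 1 cm increments
MAX_SWEEP = 0.60        # give up after 60 cm

class FakeArm:
    """Stand-in for a compliant arm: pretends the handle sits 23 cm into the sweep."""
    def __init__(self):
        self.y = 0.0
    def move_sideways(self, dy):
        self.y += dy
    def lateral_force(self):
        return 12.0 if self.y >= 0.23 else 1.5   # only friction, until the handle

def find_handle(arm):
    """Sweep until the measured force exceeds the threshold; return the handle offset."""
    swept = 0.0
    while swept < MAX_SWEEP:
        arm.move_sideways(STEP)
        swept += STEP
        if arm.lateral_force() > FORCE_THRESHOLD:
            return swept           # handle found this many metres into the sweep
    return None                    # no handle: back up, realign, and retry

offset = find_handle(FakeArm())
print(f"handle found {offset:.2f} m into the sweep")   # -> about 0.23 m
```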
Elevator Video
● Note:
– Avoidance of people on the way to and inside the
elevator
– Elevator operation by finding call and control panel
buttons
– Exiting the elevator on the correct floor
Elevator Overview
● Call the elevator by pushing the call button
– Find the call button and press it
● Wait for the elevator to arrive
– Identify the correct elevator (up or down)
● Enter the elevator
– Avoid humans and obstacles in the elevator
● Push the buttons in the elevator control panel
– Elevator interior poses challenging computer vision problem
● Exit the elevator
– Check to ensure the correct floor
Pushing the Elevator Call Buttons
● PR2 knows the approximate
location of elevator buttons due
to navigation and map
● Lines up with the wall using the
laser scanner
● Finds the button using vision
● Repeats this process if the
elevator does not arrive
Waiting for the Elevator to Arrive
● Scans the indicator lights for
elevator arrival
– Identifies correct direction (up or down)
and elevator (left or right)
● Moves quickly to the elevator before the door closes
– Avoids humans using the laser
– Rule: Only rides in empty elevators
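Watching the arrival lights amounts to asking whether a small patch of the camera image is unusually bright. Below is a minimal sketch with made-up pixel regions and brightness thresholds, not the robot's actual detector.

```python
# Sketch of watching elevator indicator lights: check whether a small image patch
# is much brighter than its usual (off) level. ROI coordinates are made up.
import numpy as np

# (row, col, height, width) patches around each indicator lamp in the camera image
ROIS = {
    ("left", "up"): (120, 200, 10, 10),
    ("left", "down"): (140, 200, 10, 10),
    ("right", "up"): (120, 420, 10, 10),
    ("right", "down"): (140, 420, 10, 10),
}

def lit_indicators(gray_image, baseline=60.0, margin=40.0):
    """Return the (elevator, direction) pairs whose lamp patch is clearly lit."""
    lit = []
    for key, (r, c, h, w) in ROIS.items():
        patch = gray_image[r:r + h, c:c + w]
        if patch.mean() > baseline + margin:
            lit.append(key)
    return lit

# Usage with a synthetic frame: dark image, one lamp "on".
frame = np.full((480, 640), 50.0)
frame[120:130, 420:430] = 220.0          # right/up lamp lit
print(lit_indicators(frame))             # -> [('right', 'up')]
```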
Pushing the Elevator Control Buttons
● Uses mechanical procedure to
align with buttons
– Similar to door pulling
● Once alignment is achieved, the
buttons are at a fixed position
with respect to the robot
– With the known position and height, it is easy to press the correct floor button
● If the elevator door fails to open,
then the robot will repeat this
process
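Because alignment puts the panel at a known pose, selecting a floor reduces to a lookup of pre-measured button offsets. A sketch of that idea, with invented offsets and a placeholder press_at() arm call:

```python
# After mechanical alignment, each elevator button is at a fixed offset from the
# robot. The offsets below are invented for illustration; `press_at` stands in
# for whatever arm command the real robot would use.
BUTTON_OFFSETS = {            # floor -> (forward, sideways, height) in metres
    1: (0.25, -0.05, 1.00),
    2: (0.25, -0.05, 1.06),
    3: (0.25, -0.05, 1.12),
    4: (0.25,  0.00, 1.00),
}

def press_at(offset):
    print(f"pressing button at offset {offset}")   # placeholder for an arm motion

def press_floor(floor):
    """Press the button for `floor`; the robot repeats this if the door never opens."""
    if floor not in BUTTON_OFFSETS:
        raise ValueError(f"no calibrated offset for floor {floor}")
    press_at(BUTTON_OFFSETS[floor])

press_floor(3)
```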
Exiting the Elevator
● Waits and uses laser
to detect when the
door opens
● Checks for correct
floor at exit
– Important because
humans may have
ordered the elevator to
stop at additional floors
Exiting the Elevator
● Determining the correct floor was very challenging
– First approach: using vision to read the floor-number sign at the exit; too prone to failure
● Not robust to lighting changes
– Second approach: using the robot's accelerometer; much more successful
● Measuring the time interval between the elevator starting and stopping accurately predicts the number of floors traveled (a sketch follows this slide)
● Much more robust
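The accelerometer approach boils down to timing how long the cab is in motion between the start and stop spikes and converting that time into a floor count. Below is a sketch with invented thresholds and timing constants, not the robot's calibrated values.

```python
# Sketch of counting floors from the elevator's motion time. The robot detects the
# acceleration spike when the cab starts and the one when it stops; the interval
# in between scales with the number of floors traveled. Constants are invented.
ACCEL_THRESHOLD = 0.4      # m/s^2 spike that marks the cab starting or stopping
FIRST_FLOOR_TIME = 6.0     # seconds for a one-floor trip (accelerate + decelerate)
PER_EXTRA_FLOOR = 2.5      # additional seconds of cruising per extra floor

def floors_traveled(timestamps, vertical_accel):
    """Estimate floors traveled from a log of (time, vertical acceleration) samples."""
    spikes = [t for t, a in zip(timestamps, vertical_accel) if abs(a) > ACCEL_THRESHOLD]
    if len(spikes) < 2:
        return 0                               # the cab never moved
    travel_time = spikes[-1] - spikes[0]       # first spike = start, last spike = stop
    if travel_time < FIRST_FLOOR_TIME:
        return 1
    return 1 + round((travel_time - FIRST_FLOOR_TIME) / PER_EXTRA_FLOOR)

# Usage: an 11-second trip comes out as a three-floor ride with these constants.
ts = [i * 0.1 for i in range(150)]
accel = [0.0] * 150
accel[10] = 1.2          # start spike at t = 1.0 s
accel[120] = -1.1        # stop spike at t = 12.0 s
print(floors_traveled(ts, accel))   # -> 3
```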
Driving to the Coffee Shop
● More door pulling and pushing
● Note the avoidance of the tables and people
Waiting in the Coffee Shop Line
● The robot drives along a predefined course
using its map
– If the laser detects a person in line, the robot stops
and only advances as the person moves forward
● While this approach works well for many
stores, it would not work in larger stores that
have multiple cashiers
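One way to picture the queue behavior: follow the prerecorded course waypoint by waypoint, but pause whenever the laser sees someone within a standoff distance straight ahead. The sketch below uses placeholder scan and drive interfaces rather than the real ROS navigation stack.

```python
# Sketch of waiting in line: advance along a prerecorded course, but stop whenever
# the laser sees someone within a standoff distance straight ahead.
import math

STANDOFF = 1.0            # stay at least 1 m behind the person in front
FOV = math.radians(20)    # only look in a narrow cone straight ahead

def person_ahead(ranges, angles, standoff=STANDOFF, fov=FOV):
    """True if any laser return inside the forward cone is closer than the standoff."""
    return any(r < standoff for r, a in zip(ranges, angles) if abs(a) < fov / 2)

def follow_queue(waypoints, get_scan, drive_to):
    """Advance waypoint by waypoint, pausing while someone is directly ahead."""
    for wp in waypoints:
        ranges, angles = get_scan()
        while person_ahead(ranges, angles):
            ranges, angles = get_scan()      # wait; the person will shuffle forward
        drive_to(wp)

# Usage with fake data: one scan shows a person 0.7 m ahead, the next is clear.
scans = iter([([0.7, 2.0], [0.0, 0.3]), ([2.5, 2.0], [0.0, 0.3])])
follow_queue(
    waypoints=[(1.0, 0.0)],
    get_scan=lambda: next(scans),
    drive_to=lambda wp: print("driving to", wp))
```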
Waiting in Line Video
● Note how the robot drives this course
Ordering and Obtaining the Coffee
● First, give the barista the written coffee order
and cash payment
● Second, take the coffee and place it in the cup
holder
● This requires an intuitive approach to enabling
humans and robots to pass objects to each
other
How Do Humans Pass Objects?
● Humans pass objects by two main approaches:
1. Receiver holds out his/her hand, giver places object into hand
2. Giver holds out object, receiver grabs object
● Thus, by using Case 1 to receive objects and Case 2 to give objects,
the robot never needs to find the object or the human's hand: it can
just hold out its hand or the object
● Humans are also very good at knowing when to let go of objects
– Humans hold onto an object until they feel the other person pull the object
back from the hand, ensuring a good grip
Giving an Object to a Human
● Object giving sequence used by PR2:
1. Holds out the robot's hand with the object
2. Uses text-to-speech to tell the person to take
the object
3. Releases the object when either of two
conditions is met:
● There is significant hand acceleration
● The human has forced the robot to move its hand
4. Folds back the arm
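The release decision in step 3 is two thresholded sensor checks combined with an OR. Here is a sketch with invented thresholds and placeholder sensor values, not actual PR2 topics or calibration.

```python
# Sketch of the hand-off release logic: let go when the gripper's accelerometer
# spikes (the person tugged the cup) or the hand has been pushed away from its
# held pose. Thresholds and example values are invented for illustration.
import math

ACCEL_THRESHOLD = 2.0          # m/s^2 beyond the gravity-compensated baseline
DISPLACEMENT_THRESHOLD = 0.03  # 3 cm of hand motion away from the held pose

def should_release(accel_xyz, hand_pos, target_pos):
    """True when either release condition from the slide is met."""
    accel_magnitude = math.sqrt(sum(a * a for a in accel_xyz))
    displacement = math.dist(hand_pos, target_pos)
    return accel_magnitude > ACCEL_THRESHOLD or displacement > DISPLACEMENT_THRESHOLD

# A sharp tug trips the accelerometer check; a steady pull trips the displacement
# check; neither means keep holding on.
print(should_release((0.1, 3.5, 0.2), (0.60, 0.00, 1.00), (0.60, 0.00, 1.00)))  # True
print(should_release((0.1, 0.2, 0.1), (0.65, 0.00, 1.00), (0.60, 0.00, 1.00)))  # True
print(should_release((0.1, 0.2, 0.1), (0.60, 0.00, 1.00), (0.60, 0.00, 1.00)))  # False
```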
Receiving an Object from a Human
● Initial simple process is similar to giving:
1. Holds out the robot's empty hand with an open
gripper
2. Uses text-to-speech to ask for the object
3. Grasps the object when the previous conditions are met (accelerometer or hand being forced back)
● This worked relatively well, but failed in some
common cases
Receiving an Object from a Human
● Most common failure: people did not actually push the object
into the hand, just into the gripper
– The solution was to use the forearm camera to detect when
significant motion occurred in front of the hand
● Despite this, sometimes the gripper closes when the object
is not present or the gripper slips off the object
– The solution was to check whether the gripper had closed fully (indicating no object) and, if it had, to try the whole process again
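Both fixes on this slide can be sketched together: require visible motion in front of the hand (simple frame differencing on the forearm camera) before closing, and retry if the gripper closes all the way onto nothing. The thresholds and the callback-style interfaces are illustrative assumptions.

```python
# Sketch of the receiving fixes: (1) require motion in front of the hand before
# closing, via frame differencing on the forearm camera; (2) if the gripper
# closes completely, there was no object, so retry. Values are invented.
import numpy as np

MOTION_THRESHOLD = 12.0    # mean absolute pixel change that counts as "something moved"
EMPTY_GRIPPER_GAP = 0.005  # gripper opening (m) below which we assume we grabbed air

def motion_in_front_of_hand(prev_frame, frame):
    """Frame-difference check on the forearm camera's view of the hand region."""
    diff = np.abs(frame.astype(float) - prev_frame.astype(float))
    return diff.mean() > MOTION_THRESHOLD

def try_receive(get_frame, close_gripper, gripper_gap, max_attempts=3):
    """Close only after motion is seen; retry if the gripper ends up empty."""
    for _ in range(max_attempts):
        prev = get_frame()
        while True:
            frame = get_frame()
            if motion_in_front_of_hand(prev, frame):
                break                        # something moved in front of the hand
            prev = frame                     # keep watching until the object arrives
        close_gripper()
        if gripper_gap() > EMPTY_GRIPPER_GAP:
            return True                      # something is actually in the hand
        # Gripper closed all the way: nothing was grabbed, so ask again and retry.
    return False

# Usage with canned data: the third frame shows motion, and the gripper reports 4 cm open.
frames = iter([np.zeros((48, 64)), np.zeros((48, 64)), np.full((48, 64), 80.0)])
print(try_receive(get_frame=lambda: next(frames),
                  close_gripper=lambda: None,
                  gripper_gap=lambda: 0.04))   # -> True
```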
Coffee Grasping and Return Video
● Observe money passing and coffee grasping
process
● Returning to our lab with the coffee by doing
all the previous navigation steps backwards
● Giving the coffee to the faculty advisor
Lessons Learned
● It is extremely fun and exciting to do this sort of work
● It is impractical in its current state: the robot is too slow, requires too much attention, and is too expensive
– However, it could still be helpful for disabled people or for artistic purposes
● Computer vision can be useful but is often unreliable
● Simple heuristic approaches often exceed the performance, and especially the reliability, of complex math-heavy algorithms
– In addition, such approaches are more predictable and easier to understand, and thus far easier to maintain
● Don't be afraid to fail; don't be afraid to retry
– Your robot is not going to work every time; it gets exponentially harder to increase reliability as you go from 90% to 99% to 99.9%, etc.
● A better approach is to detect when the robot fails and simply try again (a retry sketch follows)
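That last lesson can be captured as a generic wrapper around any behavior that reports success or failure; here is a minimal sketch.

```python
# Minimal "don't be afraid to retry" wrapper: run a behavior that reports success
# or failure, and try it again a bounded number of times before giving up.
def with_retries(behavior, max_attempts=3):
    """Run `behavior()` until it returns True or the attempt budget is spent."""
    for attempt in range(1, max_attempts + 1):
        if behavior():
            return True
        print(f"attempt {attempt} failed, retrying")
    return False

# Example: a flaky step that only works on its second try.
outcomes = iter([False, True])
print(with_retries(lambda: next(outcomes)))   # -> prints one failure, then True
```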
Future Work
● Robotics software keeps being reinvented to perform the
same tasks with different robots
– Need a robust development model that allows sharing of code and building off each coder's work
– Critical because of the large amount of software needed for
personal robotics
● I want to develop a next-generation approach to handling
this problem
– Develop an easy-to-use framework for specifying what each
robotic software application does so multiple applications can
be automatically integrated
Questions
Tony Pratkanis
sr.stanford.edu
