Hitoshi Kusano*, Ayaka Kume+, Eiichi Matsumoto+, Jethro Tan+


June 2, 2017
*Kyoto University
+Preferred Networks, Inc.
FCN-Based 6D Robotic Grasping
for Arbitrary Placed Objects
※This work is an output of the Preferred Networks internship program
Requirement for successful robotic grasping:

Derive the configurations of a robot and its end-effector

e.g. grasp pose, grasp width, grasp height, joint angles
・The traditional approach decomposes the grasping process into
several stages, which require many heuristics
・Machine-learning-based end-to-end approaches have emerged
Background
http://www.schunk-modular-robotics.com/
1/9
Complex end-effector Cluttered environment
None of the prior methods can predict a 6D grasp

Previous Work

~ Machine learning based end-to-end approach ~
Pinto2016 Levine2016
Araki2016 Guo2017
(x, y, width, height)
2/9
(x, y, z, roll, pitch, yaw)
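The two parameterizations contrasted above can be written out as simple data structures. This is purely illustrative; the class and field names are assumptions for exposition, not the authors' code:

```python
from dataclasses import dataclass

@dataclass
class PlanarGrasp:
    """Top-down grasp used in prior end-to-end work (Pinto2016, Guo2017, ...)."""
    x: float       # image-plane position
    y: float
    width: float   # gripper opening
    height: float  # approach height

@dataclass
class SixDGrasp:
    """Full 6D grasp configuration targeted by this work."""
    x: float
    y: float
    z: float
    roll: float
    pitch: float
    yaw: float
```

The extra three degrees of freedom are what allow grasping objects in arbitrary poses rather than only from directly above.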
Our purpose:
End-to-end learning to grasp arbitrarily placed objects

Contribution:
○ A novel data collection strategy to obtain 6D grasp
configurations using a teach tool operated by a human
○ An end-to-end CNN model predicting 6D grasp configurations
Purpose and Contribution
(x, y, z, w, p, r)
3/9
● An extension of Fully Convolutional Networks
● Outputs two score maps: a Location Map giving graspability per pixel, and a
Configuration Map providing end-effector configurations (z, w, p, r) per pixel
● For the Configuration Map, the network classifies valid grasp configurations
into 300 classes (classification, NOT regression)
Grasp Configuration Network
(x, y, z, w, p, r)
4/9
Location Map
Configuration Map
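A minimal sketch of how the two output maps could be decoded into a single 6D grasp, assuming (H, W) and (H, W, 300) map shapes and a lookup table mapping each class to a representative (z, w, p, r); the authors' exact decoding procedure is not specified on this slide:

```python
import numpy as np

N_CLASSES = 300  # the network classifies configurations into 300 classes

def decode_grasp(location_map, configuration_map, class_table):
    """Pick the most graspable pixel and look up its configuration.

    location_map:      (H, W) graspability score per pixel
    configuration_map: (H, W, N_CLASSES) class scores per pixel
    class_table:       (N_CLASSES, 4) representative (z, w, p, r) per class
    Returns the full 6D grasp (x, y, z, w, p, r).
    """
    # Most graspable pixel gives the in-plane position (x, y)
    y, x = np.unravel_index(np.argmax(location_map), location_map.shape)
    # Highest-scoring class at that pixel gives (z, w, p, r)
    cls = int(np.argmax(configuration_map[y, x]))
    z, w, p, r = class_table[cls]
    return (x, y, z, w, p, r)
```

Classifying into discrete configuration classes (rather than regressing) sidesteps the averaging problem when several distinct grasps are valid at the same pixel.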
Data Collection
Simple teach tool / Data collection
We demonstrated 11,320 grasps on 7 objects
5/9
Robotic Gripper
https://www.thk.com
A. Intel Realsense SR300 RGB-D camera
B. Arbitrarily placed object
C. THK TRX-S 3-finger gripper
D. FANUC M-10iA 6 DOF robot arm
Experiment Setup
B
C
D
A
6/9
● Predicted grasp configurations for the same (x, y) location
Example of predicted grasp configurations
Cap
Bottle
TOP VIEW FRONT VIEW
Grasp Candidate Grasp Candidate
7/9
Known Objects Unknown Objects
Results of robotic experiment
70% 50% 60% 40%
20% 40% 60%
Numbers under each figure indicate the success rate over 10 trials
60% 20% 20% 40% 30%
8/9
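As a rough aggregate of the per-object rates above (each measured over 10 trials); the split between known and unknown objects is not recoverable from this text, so all twelve rates are simply pooled:

```python
# Per-object success rates from the slide, each out of 10 trials
rates = [0.70, 0.50, 0.60, 0.40, 0.20, 0.40, 0.60,
         0.60, 0.20, 0.20, 0.40, 0.30]

# Pooled mean success rate across all tested objects
mean_rate = sum(rates) / len(rates)
print(f"mean success rate over {len(rates)} objects: {mean_rate:.1%}")
# prints: mean success rate over 12 objects: 42.5%
```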
System Test
※This video is played at double speed
9/9
Thank you for listening
and
I hope to talk to you in the interactive session
