AI on Edge Devices
Experiences building a Smart Security Camera
@markawest
Who Am I?
• IoT Hobbyist.
• Data Scientist Manager.
• Leader of javaBin (Norwegian Java User Group).
@markawest
Agenda
• Motivation
• Pi Zero Camera
• Adding AWS
• The Movidius NCS
• Conclusion
@markawest
Motivation
@markawest
Requirements
@markawest
Functional
• Monitor activity in the garden.
• Send warning when activity detected.
• Live video stream.
Non-functional
• In place as soon as possible.
• Low cost.
• Portable.
Pi Zero Camera
@markawest
Hardware
@markawest
Software: Motion
@markawest
• Open source motion detection software.
• Excellent performance on the Raspberry Pi Zero.
• Built-in Web Server for streaming video.
• Detected activity or ‘motion’ triggers events.
How the Motion Software works
@markawest
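The speaker notes describe the mechanism: Motion compares each frame with the previous one and raises a motion event when the number of changed pixels crosses a configurable threshold. The snippet below is a rough Python/OpenCV approximation of that idea, not Motion's actual implementation; the threshold values and camera index are placeholders.

import cv2

PIXEL_DIFF_THRESHOLD = 25   # per-pixel intensity change that counts as "changed" (placeholder)
CHANGED_PIXEL_LIMIT = 1500  # number of changed pixels that triggers a motion event (placeholder)

capture = cv2.VideoCapture(0)  # first attached camera
previous = None

while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)
    if previous is None:
        previous = gray
        continue
    # Compare the current frame with the previous one.
    diff = cv2.absdiff(previous, gray)
    changed = cv2.countNonZero(cv2.threshold(diff, PIXEL_DIFF_THRESHOLD, 255, cv2.THRESH_BINARY)[1])
    if changed > CHANGED_PIXEL_LIMIT:
        print("Motion event: %d pixels changed" % changed)
    previous = gray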
Example Alert Email
@markawest
Example False Alarms from Motion
@markawest
(examples: cat, cloud)
Adding AWS to the Camera
@markawest
Adding a Smart Filter
@markawest
(illustration labels: cat, person, cat)
AWS Rekognition
@markawest
• Image Analysis as a Service, offering a range of APIs.
• Built upon Deep Neural Networks.
• Many alternatives: Google Vision, Microsoft Computer Vision, Clarifai.
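As an illustration of what a Rekognition call looks like from Python, here is a minimal sketch using boto3's detect_labels API; the bucket name, object key, region and confidence threshold are placeholders.

import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-1")  # placeholder region

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-camera-snapshots", "Name": "garden/latest.jpg"}},
    MaxLabels=10,
    MinConfidence=75,
)

for label in response["Labels"]:
    print(label["Name"], label["Confidence"])

# A simple "smart filter": only treat the snapshot as interesting if a person was detected.
person_detected = any(label["Name"] == "Person" for label in response["Labels"])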
AWS Rekognition Demo
1. Camera pushes snapshot to AWS.
2. Snapshot analysed via AWS Rekognition.
3. Trigger warning email (if snapshot contains a person).
4. Email alert sent (with snapshot).
@markawest
Adding AWS Rekognition to the Camera
Smart Camera AWS Pipeline: AWS S3 (storage), AWS Lambda Functions, AWS Step Function (workflow), AWS Rekognition, AWS Simple Email Service and AWS IAM.
1. The camera uploads a snapshot to Amazon S3.
2. The upload triggers a Lambda Function.
3. This in turn triggers a Step Function, which orchestrates further Lambda Functions into a workflow.
4. The first Lambda Function calls AWS Rekognition to evaluate the snapshot.
5. The second Lambda Function uses the Simple Email Service to send the alert email.
6. All components use AWS IAM to ensure they have access to the services they need.
@markawest
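The sketch below collapses this pipeline into a single illustrative Lambda handler (Python with boto3) to show the shape of the logic; in the real project the Rekognition call and the email sending were separate Lambda Functions orchestrated by the Step Function, and the sender/recipient addresses and confidence threshold here are placeholders.

import boto3

rekognition = boto3.client("rekognition")
ses = boto3.client("ses")

def lambda_handler(event, context):
    # S3 upload event: extract the bucket and object key of the new snapshot.
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    key = record["object"]["key"]

    labels = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        MinConfidence=75,
    )["Labels"]

    person_detected = any(label["Name"] == "Person" for label in labels)
    if person_detected:
        ses.send_email(
            Source="camera@example.com",                       # placeholder sender (must be SES-verified)
            Destination={"ToAddresses": ["me@example.com"]},   # placeholder recipient
            Message={
                "Subject": {"Data": "Person detected in the garden"},
                "Body": {"Text": {"Data": "Snapshot: s3://%s/%s" % (bucket, key)}},
            },
        )
    return {"person_detected": person_detected}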
Smart Camera with AWS Demo
@markawest
Smart Camera with AWS Evaluation
Positives
• Reduced False Positive emails.
• AWS pipeline is «on demand», scalable and flexible.
• Low cost of project startup and experimentation.
• Satisfies project requirements.
Negatives
• Result is only as good as the snapshots generated by Motion.
• Monthly cost grows after first year.
• AWS Rekognition doesn't cope well with «noisy» environments.
• AWS Rekognition is a «Black Box».
@markawest
The Movidius NCS
@markawest
Introducing the Intel Movidius NCS
• USB stick for speeding up Deep Learning inference on constrained devices.
• Contains a low power, high performance VPU.
• Supports Caffe and TensorFlow models.
@markawest
Hypothesis: Inference on The Edge
• Use the Movidius NCS for on-board image analysis and filtering.
• Cut out AWS and The Cloud!
• The Movidius NCS costs around 1000 NOK.
• Potential savings (per Camera) of 80-150 NOK per month!
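(At those figures the NCS would pay for itself per camera in roughly 7 to 13 months: 1000 NOK / 150 NOK ≈ 6.7 months at the high end of the savings, and 1000 NOK / 80 NOK = 12.5 months at the low end.)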
@markawest
Movidius NCS Workflow
@markawest
Step One: Secure a pre-trained Deep Learning Model (Mobilenet-SSD, 20 categories).
Step Two: Compile the Model to a Graph file for use with the NCS (NCS Tools SDK).
Step Three: Deploy the Graph file to the NCS and start inferring (NCS API SDK).
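As an illustration of Steps Two and Three, here is a rough sketch based on my understanding of the NCSDK 1.x command line tools and Python API; file names are placeholders and the exact calls may differ between SDK versions.

# Step Two (development machine with the NCSDK tools installed), roughly:
#   mvNCCompile MobileNetSSD_deploy.prototxt -w MobileNetSSD_deploy.caffemodel -o mobilenet_ssd.graph

# Step Three (Raspberry Pi with the NCS plugged in).
import numpy
from mvnc import mvncapi as mvnc

devices = mvnc.EnumerateDevices()             # find attached NCS devices
device = mvnc.Device(devices[0])
device.OpenDevice()

with open("mobilenet_ssd.graph", "rb") as f:  # the compiled Graph file from Step Two
    graph = device.AllocateGraph(f.read())

# Stand-in for a pre-processed camera frame (Mobilenet-SSD expects a 300x300 input).
frame = numpy.zeros((300, 300, 3), dtype=numpy.float16)

graph.LoadTensor(frame, "user object")        # forward the frame to the NCS
output, _ = graph.GetResult()                 # blocking call: inference results

graph.DeallocateGraph()
device.CloseDevice()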
Real-time Video Annotation
@markawest
NCS API SDK
1. Fetch a single frame from the video stream (web camera).
2. Pre-process the frame and forward it to the Movidius NCS.
3. Infer objects in the frame using the Graph file and return results to the Raspberry Pi.
4. Annotate the frame based on the inference results.
5. Display the frame in the Raspbian Desktop.
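The loop below is a simplified sketch of how these five steps could fit together on the Raspberry Pi; run_inference() is a hypothetical helper standing in for the LoadTensor()/GetResult() calls from the previous sketch, and the window handling assumes the Raspbian desktop is running.

import cv2

def run_inference(frame):
    # Hypothetical stand-in for the graph.LoadTensor()/GetResult() calls shown earlier.
    # A real implementation would return (label, confidence, bounding box) tuples.
    return []

capture = cv2.VideoCapture(0)                 # step 1: video stream (web camera)
while True:
    ok, frame = capture.read()                # step 1: fetch a single frame
    if not ok:
        break
    resized = cv2.resize(frame, (300, 300))   # step 2: pre-process for Mobilenet-SSD
    for label, confidence, (x1, y1, x2, y2) in run_inference(resized):  # step 3: infer on the NCS
        # step 4: annotate the frame based on the inference results
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, "%s %.0f%%" % (label, confidence * 100),
                    (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imshow("Smart Camera", frame)         # step 5: display the frame on the desktop
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break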
Movidius NCS Demo
@markawest
Movidius NCS Performance Analysis
@markawest
                      Pi Camera (threaded*)   USB Camera
Pi 3 B+               4.2 FPS                 4.48 FPS
Pi Zero               1.5 FPS                 1 FPS
Pi Zero (headless**)  2.5 FPS                 1.5 FPS
* Image I/O moved into a separate thread from the main processing.
** Booting directly to the command line (no Raspbian desktop).
Movidius NCS Evaluation
Positives
• Every single frame is processed.
• No ongoing costs.
• Bring your own model.
• Faster than the Raspberry Pi + OpenCV alone.
Negatives
• Slow performance on Pi Zero.
• No pipeline, less scalable.
• Potentially higher hardware cost.
• Future for the Movidius NCS?
@markawest
Evaluation
@markawest
@markawest
There is more than one way to skin a cat!
• AI enabled IoT is available at The Edge, but requires extra hardware, raising the cost (and complexity) per unit.
• Adding AI to IoT via The Cloud creates an ongoing cost, but can also provide valuable infrastructure.
• Combinations of the above are also possible!
Google Edge TPU Accelerator
- Debian Linux, TensorFlow Lite.
- 65mm x 30mm.
- Compatible with Raspberry Pi.
- Coming soon (autumn 2018)!
https://aiyprojects.withgoogle.com/edge-tpu
@markawest
“Player Two has entered the game!!”
Thanks for listening!
@markawest
Further Reading
1. Pi Zero Camera with Motion
2. Adding AWS to the Pi Zero Camera Part 1
3. Adding AWS to the Pi Zero Camera Part 2
4. Movidius NCS Quick Start
5. Using the Movidius NCS with the Pi Camera
6. Using the Movidius NCS with the Pi Zero
@markawest
Editor's Notes
• But first, who the devil am I? As you can see from my Twitter handle my name is Mark West, and I'm an Englishman living here in Oslo, Norway.
• Speaking for me is a hobby that I do to learn and share my own knowledge and experiences. In the past couple of years I have spoken at a range of conferences across Europe and the US. The good news is that this is the first time I have spoken at NDC. This is also the first time I have given this specific talk, so I am excited to hear your feedback. So let's get started!
• Here is the Agenda for my talk. As you can see it is split into four sections.
• So how does Motion work? Well, it basically monitors the video stream from the camera. Each frame is compared to the previous, in order to find out how many pixels (if any) differ. If the total number of changed pixels is greater than a given threshold, a motion alarm is then triggered.
• Ok, so how did the AWS processing work? Here's a simplified run-through. First an image is pushed from the Pi Zero Camera to Amazon's S3 storage. A small unit of code, or Lambda Function, is triggered by the upload. It in turn triggers a Step Function. The Step Function orchestrates further Lambda Functions into a workflow. The first Lambda Function makes a call to Rekognition to evaluate the picture. The second Lambda Function uses the Simple Email Service to send the alert email. Finally, all components in the workflow use Identity and Access Management to make sure that they have access to the components they need to use. For example, the Lambda Function that sends an email needs access to both the Simple Email Service and to S3 in order to attach the image file to the email.
• The SDK Tools allow you to convert your trained Deep Learning Model into a "graph" that the NCS can understand. This would be done as part of the development process. The SDK API allows you to work with your graph at run-time: loading your graph onto the NCS and then performing inference on data (i.e. real-time image analysis).
• < 1 FPS with the Raspberry Pi 3 B+.
• Google Edge TPU Accelerator: USB Type-C (data/power). Dimensions: 65 mm x 30 mm. Compatible with Raspberry Pi boards at USB 2.0 speeds only. Supported Operating Systems: Debian Linux. Supported Frameworks: TensorFlow Lite.