Alexa, let’s make a
skill
@michaelpeacock
◇ Freelance developer
◇ Consultant CTO
◇ PHPNE organiser
◇ Occasional speaker and author
Our skill
◇ Conference bot
■ What is on in a particular room
■ What is a particular speaker talking about
Alexa Devices
Audio only
◇ Echo
◇ Echo Dot
◇ Echo Plus
Companion App
With a screen
◇ Echo Show
◇ Echo Spot
◇ Fire Tablet
◇ FireTV
FireTV Devices
Echo Show
Alexa Skill Flow
Two sides to skill
development
◇ Developer Console / Interaction Model
◇ Endpoint (your code)
Amazon Developer
Console
Create a new skill
Set a name (this is not what your users will say to invoke your skill)
Select a custom model
Skill builder
Anatomy of a skill
◇ Interaction model
◇ Interfaces
◇ Endpoint
Interaction Model
Interaction model defines how our users will interact with our
skill, and how certain voice commands should map to
different parts of our skill.
Invocation
Skills need to be invoked, either to open the skill or to tell
Alexa that we want a command to be processed by a
particular skill.
Setting an invocation name
Slot Types
◇ Before we worry about what our users want to do with
our skill
◇ We need to think about the variables they might want
to pass to us, so we can include them later.
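As a sketch, a custom slot type might look like this in the interaction model JSON (the ROOM name, IDs and synonyms are illustrative, not from the deck):

```json
{
  "name": "ROOM",
  "values": [
    {
      "id": "room-a",
      "name": { "value": "Room A", "synonyms": ["the main room"] }
    }
  ]
}
```

Giving each value an ID means our code can key data off stable IDs rather than whatever phrase the user happened to say.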
Specify an ID
Built in Slot Types
Intents
Required Intents
Built-in intents
Custom Intents
Create a custom intent
Utterances
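Put together, an intent with a slot and its sample utterances appears in the model JSON roughly like this (intent and slot names are illustrative; the curly braces pull in the slot):

```json
{
  "name": "WhatsOnIntent",
  "slots": [{ "name": "Room", "type": "ROOM" }],
  "samples": [
    "what is happening in {Room}",
    "what's happening in {Room}",
    "which talks are on in {Room}"
  ]
}
```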
Save and build.
Serverless
Installation
npm install -g serverless
Create a project
serverless create --template aws-nodejs --path confoo
Skeleton Project: serverless.yml
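A minimal serverless.yml for this kind of project might look like the following sketch (service, profile and handler names are illustrative; pick a Node runtime Lambda currently supports):

```yaml
service: confoo

provider:
  name: aws
  runtime: nodejs8.10
  profile: confoo

functions:
  alexa:
    handler: handler.alexa
```

Only functions listed under `functions:` are deployed as Lambda functions, and only those can be run with `serverless invoke`.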
Skeleton project: handler.js
Install Alexa SDK
npm install alexa-sdk
Handlers
Invoking a function locally
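A hypothetical local invocation, assuming a getSpeakerTalks function is exposed in serverless.yml and the speaker ID is illustrative:

```shell
serverless invoke local --function getSpeakerTalks \
  --data '{"speakerId": "michael-peacock"}'
```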
Authenticate
serverless config credentials \
  --provider aws \
  --profile confoo \
  --key YOURKEY \
  --secret YOURSECRET
Set the authentication profile: serverless.yml
Deploy
serverless deploy
Lambda Testing
NB: Accessing IDs instead of the name
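The handlers object and the ID lookup can be sketched as follows. The intent name, slot name and talk data are illustrative, and a stand-in object replaces the alexa-sdk context so the sketch runs on its own; with the real SDK, `this` is provided when the handlers are registered.

```javascript
// Illustrative hard-coded data, keyed by slot value ID
const talksByRoomId = {
  'room-a': "Alexa, let's make a skill",
};

const handlers = {
  LaunchRequest: function () {
    this.emit(':tell', 'Welcome to conference bot');
  },
  WhatsOnIntent: function () {
    // With entity resolution, the matched ID (not the spoken value)
    // is nested under resolutions in the request JSON:
    const slot = this.event.request.intent.slots.Room;
    const id = slot.resolutions.resolutionsPerAuthority[0].values[0].value.id;
    this.emit(':tell', 'Now on: ' + talksByRoomId[id]);
  },
};

// Stand-in for the SDK context so the sketch runs without alexa-sdk:
const fakeThis = {
  event: { request: { intent: { slots: { Room: { resolutions: {
    resolutionsPerAuthority: [{ values: [{ value: { id: 'room-a' } }] }],
  } } } } } },
  emit: (type, speech) => console.log(speech),
};
handlers.WhatsOnIntent.call(fakeThis);
// prints: Now on: Alexa, let's make a skill
```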
Lambda Trigger
Set the endpoint within the Alexa console
Testing the skill invocation
SSML
Speech Synthesis Markup Language
this.emit(':tell', '<say-as interpret-as="interjection">Oh boy</say-as><break time="1s"/> this is just an example.');
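A couple more tags as a sketch; spell-out, break and emphasis are standard SSML supported by Alexa, and the text here is illustrative:

```javascript
// Compose an SSML fragment: spell out an acronym, pause, then emphasise
const ssml = '<say-as interpret-as="spell-out">PHP</say-as>'
  + '<break time="500ms"/>'
  + ' <emphasis level="strong">really</emphasis> matters';
console.log(ssml);
```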
Sessions
Persist data:
- DynamoDB permission
- alexa.dynamoDBTableName = 'OurSkillData';
- this.attributes['key'] = 'value';
Of note
◇ Alexa Serverless Plugin
◇ Alexa CLI
Thanks!
Any questions?
You can find me at:
◇ @michaelpeacock
Credits
Special thanks to all the people who made and released
these awesome resources for free:
◇ Presentation template by SlidesCarnival

Editor's Notes

  • #4: Conference bot – a skill to get information about a particular conference. We will use this to find out what is on in a particular room, and what a particular speaker is talking about. The interaction model maps what the user wants to do / find out about (the intent) to our code.
  • #5: Amazon has a whole suite of Alexa-based devices, which tend to have different features when it comes to processing skills. It's also possible for Alexa to be built into non-Amazon devices.
  • #6: In this talk, we are focused on audio-only devices: the Echo, Echo Dot and Echo Plus. They don't have a screen, so interaction with them is purely through audio commands.
  • #7: Although the Echo doesn't have a screen, the mobile application serves as a companion app. It displays data sent from the skill, which can be rich media such as images, or just text. It's also handy for improving the performance of Alexa, as it tells you what the Alexa device heard, letting you play the exact audio and confirm that it did the right thing. You only see this data for your own devices; you cannot get it for other devices / users of your skill.
  • #8: Some Alexa devices have a screen, but how they work is again slightly different. FireTV devices and Fire Tablets have Alexa support, but the interaction is essentially exclusively voice – the main difference is that you can display a "card" to the user, which contains companion information. The Echo Show does have touch screen support, and both the Show and Spot have a camera built in, which allows a little more scope. You can use them to play videos, and there is support for this within the Alexa skill builder.
  • #9: This is a Fire TV response; in addition to reading out the answer, it shows up on the screen.
  • #10: And on the Echo Show.
  • #11: - Walk through the flow: user asks, device looks up model in alexa, that processes the intent and communicates with your skill (sends a JSON payload) which returns a response (JSON payload) which is then sent to the device to read out and to the companion app.
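The response payload in that flow follows the standard Alexa response schema; a minimal sketch (the speech and card text are illustrative):

```json
{
  "version": "1.0",
  "response": {
    "outputSpeech": { "type": "PlainText", "text": "Welcome to conference bot" },
    "card": { "type": "Simple", "title": "Conference Bot", "content": "Welcome" },
    "shouldEndSession": true
  }
}
```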
  • #12: Two sides to a skill: interaction model, and the endpoint (your code)
  • #13: Developer console
  • #14: Give the skill a name – this is just a name; it's not what users say to interact with or invoke the skill.
  • #15: There are some pre-built models for things like flash-briefing and audio playing skills, we want to create a custom skill, so we select that.
  • #16: Console. Items to configure on the left, checklist on the right, testing at the top.
  • #17: Anatomy: Interaction model, interfaces, endpoint. Interfaces = audio player, display interface for screen and voice interaction and video app for video playback.
  • #18: Interaction model defines how our users will interact with our skill, and how certain voice commands should map to different parts of our skill.
  • #19: Skills need to be invoked, either to open the skill or to tell Alexa that we want a command to be processed by a particular skill.
  • #20: Set an invocation name. Not a land grab.
  • #21: In order for Alexa to pass custom information back to our skill, we need to define some slot types. In the context of wanting to ask about a particular speaker or conference room, we would define these as slot types. Why? etc
  • #25: Geography: cities and states, only for certain countries. Date, time, numbers etc
  • #27: Cancel, stop and help
  • #28: Yes, No, stop, skip, and other media playback
  • #29: We want to build some custom intents for our skill. We will want one to tell us what talks are happening now in a particular room, one to tell us about a particular speaker and maybe one to tell us about a particular talk.
  • #30: Add intent – provide a name, and click create.
  • #31: Once we are in the intent management screen, we can scroll down to intent slots, where we can link a slot type to our intent. This allows us to inform Alexa that this intent is going to make use of, or expect, data passed in in the form of a slot.
  • #33: We can configure the intent slot to make it mandatory
  • #34: Utterances – these are lists of things a user might say to Alexa with the same intent, e.g. "what is happening in room A", "what's happening in room A", "which talks are on in room A". These are all different ways of asking the same question. We need to provide as many different utterances as possible.
  • #35: Creating an utterance. Using curly brace lets us pull in a slot.
  • #36: Lots of utterances
  • #37: Within each section we need to save as we go along; however, for the settings to be applied to our skill we need to build it. This lets the Alexa service essentially compile our intents, utterances and so on, so that it can apply them to incoming voice requests. The build verifies the skill data; we cannot test the skill unless it has been built.
  • #39: Install serverless with node, using the -g flag to install it globally on our system.
  • #40: Run serverless create to create a new project. We are using the aws-nodejs template to tell serverless this is a project we will deploy to AWS (i.e. lambda functions) and we want to use nodejs. Lambda has support for Python, Node and Java. We also supply a path for where we want the project to be saved locally.
  • #41: The framework will then create a project for us, including some boilerplate: a configuration file and a JavaScript file.
  • #42: Config file: the service name is used as a prefix for the Lambda function name when deploying, alongside details about the provider we will deploy to, the language being used, and a list of functions. The functions listed here map functions from our JS file to functions we want to deploy as standalone Lambda functions. Our JS can have as many functions as we want for internal calls; however, only functions defined here are exposed as Lambda functions which services such as Alexa can call. It is also worth noting that if we want to run any of these functions with serverless, they have to be defined here too.
  • #43: Sample JS, just a function – not alexa specific here.
  • #44: Install alexa node sdk
  • #45: Import the SDK
  • #46: This function is our main handler, registered in our YAML file. We instantiate the Alexa SDK and register some handlers; the handlers are the code which maps to specific intents.
  • #47: Handlers are defined in an object, mapping the intent name to a callback function to be executed. Here, on our launch request (i.e. our skill loads up), we tell Alexa to say "welcome to conference bot" with the speak method, and we pass the name of the skill and the welcome message to the card renderer. When it comes to deploying or testing this, it will result in JSON output which tells Alexa to do these things; we will come to that later.
  • #48: Since we want our skill to be able to tell us about rooms and talks and speakers, we need to give it access to that data. Ideally we would have our skill communicate with an API, but for the purposes of this demonstration, let's just have some data hard-coded in. Since we are going to map slot value IDs to data, we use those IDs as keys in our data array.
  • #49: We can put together some helper functions which take the IDs and return relevant data. For the purposes of testing this, I have also added these to my serverless.yml file so they can be locally tested.
  • #50: Finally, we can build up a handler for one of our intents. Here we take the intent from the request, and from that we take the ID of the conference room being provided. Because we are working with IDs and not just the value passed in, it's nested quite far down the JSON that Alexa passes to us, but we will see that structure shortly. Once we have the ID, we can look up the talks for that room, and tell Alexa to say something in response.
  • #51: To locally test a function, we can use the invoke local command within serverless, and tell it which function we want to invoke (must be exposed in serverless.yml) and pass some data. Based on our code, this means if I pass in my speaker ID it will tell me which talks I’m presenting.
  • #52: Now that we have a skill, we need to deploy it. To deploy it, we need to give serverless our AWS credentials. We could store these in the project, but that's not good for security; and we don't want to just use some global settings, as we might have multiple AWS accounts. So instead we store the credentials against a profile – just a name we associate with the credentials. They are stored in our home directory, so they are not part of the project.
  • #53: We then tell the project which authentication profile to use
  • #54: When we are ready to deploy, we just run serverless deploy.
  • #55: Serverless will then build a Lambda stack for our project, zip up our function code, upload it to Amazon's Simple Storage Service (S3), and link this to our new function.
  • #56: If we look at AWS lambda, we now have a number of functions, one for each defined in our serverless.yml file, the top one here being our Alexa entry point, the other two being ones created for local testing.
  • #58: Within the settings for our lambda function on AWS, there are some test options at the top, from here we can configure a test event. This essentially allows us to save a JSON payload which we will then fire at our lambda function, and be able to see the response from within the console.
  • #59: We should pick an alexa template, the MyColorIs is one which has a slot in it
  • #60: This is the template: it shows a sample alexa JSON payload, with a slot being provided, in this case it’s a colour with a value of blue.
  • #61: We can customise this to match our intent, our slot type and our slot value. NB: this is based off slot value (not IDs, so we will need to edit this to be based off an ID, however for the purposes of showing this, the skill code was set to work off the value)
  • #62: Here we see the response and log output. We can log via console.log in our skill code, as I've done where it says "Alexa, let's make a skill".
  • #63: This is the JSON request we would use for when an ID is provided. It shows how a value for a slot is resolved to an ID. Not too sure about the detail in here, but it seems to imply there could be other services which we could use to work out what slot value is being provided.
  • #65: Within the lambda configuration we can add a trigger, this tells lambda what is allowed to invoke or trigger the function
  • #66: We will select Alexa, which gives us some configuration options (next slide)
  • #67: Including if we want to restrict in bound requests to a specific skill id. If the skill id doesn’t match the function won’t invoke.
  • #68: Set the endpoint within the Alexa console. This is the opposite of what we have just done, here we tell Alexa that once the skill has been invoked and the intent and slots resolved, it should then send the request to our endpoint, which for us is a particular lambda function. The alternative to a lambda function is an HTTP endpoint.
  • #69: Testing via the console. Jump to the console and say “ask conference bot what is happening in fontaine e”, walk through the JSON in and out
  • #70: SSML: Speech Synthesis Markup Language lets us customise the voice response. All sorts of different things are available, including spelling things out, saying numbers as words, and changing emphasis; there are also specific words or sayings that Alexa is pre-programmed to say in a particular way.
  • #71: Data persistence with Alexa can easily be done on a per-skill-install basis. Give your Lambda function access to DynamoDB, give the Alexa SDK a table to use, and then just store data in the this.attributes array. Alexa seamlessly handles this and stores the data mapped to an ID representing the user of the skill (i.e. this installation of the skill).
  • #72: Of note: the Alexa serverless plugin and improvements to the Alexa CLI. Also, the interaction model can be defined as JSON.