EmitEvent: The traffic cop of the Alexa Skills Kit SDK for Node.js
The EmitEvent function is the traffic cop of the ASK SDK, taking the state and shuttling intents to the right place. In this post, we'll look at what's going on. As a reminder, this is the Dig Deep series, where we look line-by-line at the tools and libraries we use to build voice-first experiences. This is not the place to go for tutorials, but if you want to learn interesting little nuggets about what you use every day, off we go…
Immediately, we see that we’re going to be using our current state, which is on the session attributes object.
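For anyone who hasn't bumped into it before, that state is just a string stored alongside the rest of the session attributes. Here's a minimal sketch of "grab the current state"; the STATE key is how I recall the v1 SDK storing it, so treat that detail as an assumption:

```javascript
// Sketch: the current state is a plain string kept in the session
// attributes; an empty string means no state has been set yet.
function currentState(event) {
    const attributes = (event.session && event.session.attributes) || {};
    return attributes.STATE || '';
}
```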
Next up is a long conditional, checking for types of requests.
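Its overall shape is roughly the sketch below. This is a condensed paraphrase rather than the SDK's verbatim source, and the helper name is mine; the point is that each branch boils the incoming request down to an event-name string.

```javascript
// Condensed sketch of the dispatch inside EmitEvent (paraphrased, not the
// SDK's exact code). Each branch turns the request into an event name.
function resolveEventString(event, state, listenerCount) {
    if (event.session && event.session.new &&
            listenerCount('NewSession' + state) === 1) {
        return 'NewSession';
    } else if (event.request.type === 'LaunchRequest') {
        return 'LaunchRequest';
    } else if (event.request.type === 'IntentRequest') {
        return event.request.intent.name;
    }
    // ...further branches cover SessionEndedRequest, AudioPlayer.*,
    // PlaybackController.*, and Display.ElementSelected.
    return event.request.type;
}
```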
NewSession is invoked at, well, exactly what it sounds like: the beginning of the session. This could be the user saying "Alexa, open Custom Skill," but it could also be "Alexa, ask Custom Skill to do something custom."
LaunchRequest is invoked when the skill is launched without a specific intent. In our previous example, that would be "Alexa, open Custom Skill." If that's already handled by the previous condition, how does this one ever come into play? The NewSession condition also checks to see if there are any NewSession handlers for the given state. If not, it falls through to LaunchRequest.
What this means for us is: we don't need LaunchRequest without a state if we have a NewSession handler. And we never need LaunchRequest with a state, because that's a contradiction in terms.
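Concretely, with handlers set up like the sketch below (the responses are just placeholders), the launch is routed to NewSession and the stateless LaunchRequest handler never fires:

```javascript
const handlers = {
    // Claims every fresh session, whether the user said "open Custom Skill"
    // or invoked the skill with a one-shot request.
    'NewSession': function () {
        this.emit(':ask', 'Welcome! What would you like to do?',
            'What would you like to do?');
    },

    // Redundant once NewSession exists: the dispatch never falls through
    // to LaunchRequest in that case.
    'LaunchRequest': function () {
        this.emit(':ask', 'Welcome!', 'What would you like to do?');
    }
};
```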
IntentRequest is sent when the user says something that matches one of your sample utterances and resolves to an intent in your intent schema. This is the one that is generally most intuitive to people, because it is something we define ourselves and can see in our configuration. If this is the event type, our eventString is the intent name.
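So for a request shaped like the sketch below (heavily trimmed, and GetFactIntent is a made-up intent name), the eventString ends up being the intent name itself:

```javascript
// A trimmed-down IntentRequest, roughly as Alexa would deliver it.
const event = {
    session: { new: false, attributes: {} },
    request: {
        type: 'IntentRequest',
        intent: { name: 'GetFactIntent', slots: {} }
    }
};

// For an IntentRequest, the event name is simply the intent name.
const eventString = event.request.intent.name;   // 'GetFactIntent'
```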
SessionEndedRequest happens in three situations: the user asks to end the skill, the user does nothing when your skill is waiting, or an error is thrown.
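Helpfully, the request itself tells you which of the three it was via its reason field, so a handler can distinguish them. A sketch:

```javascript
// Sketch of a SessionEndedRequest handler. The reason field is
// 'USER_INITIATED', 'EXCEEDED_MAX_REPROMPTS', or 'ERROR', matching the
// three situations above.
const handlers = {
    'SessionEndedRequest': function () {
        const reason = this.event.request.reason;
        if (reason === 'ERROR') {
            console.error('Session ended with error:', this.event.request.error);
        } else {
            console.log('Session ended:', reason);
        }
        // You can't send a response to a SessionEndedRequest; this is
        // purely a chance to log and clean up.
    }
};
```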
The next two are namespaced requests for audio streaming (AudioPlayer) and for audio-streaming interaction with a remote control (PlaybackController). On that subject, does anyone actually use the remote control that comes with the Echo?
The last one is for the Echo Show and checks to see if an element on-screen was selected.
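As far as I can tell, all three of these namespaced types are handled the same way: the part after the dot becomes the event name. Something like:

```javascript
// Sketch: namespaced request types map to events named after the part
// following the dot, e.g.
//   'AudioPlayer.PlaybackStarted'          -> 'PlaybackStarted'
//   'PlaybackController.NextCommandIssued' -> 'NextCommandIssued'
//   'Display.ElementSelected'              -> 'ElementSelected'
function namespacedEventName(requestType) {
    return requestType.split('.')[1];
}
```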
You already understand what this step does (it appends the current state to the event string), but I wanted to highlight it because it shows that our state-based events are nothing more than concatenated events and states. If we wanted to, we could register these handlers directly rather than using Alexa.CreateStateHandler. In other words, we don't have an AMAZON.YesIntent on our States.GUESSMODE state, we have AMAZON.YesIntent_GUESSMODE.
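To make that concrete, the two registrations below end up equivalent once the names are concatenated (a sketch; GUESSMODE and the responses are just examples):

```javascript
const Alexa = require('alexa-sdk');

const GUESSMODE = '_GUESSMODE';   // example state, separator included

// The idiomatic way: the SDK appends the state to every key for us.
const guessModeHandlers = Alexa.CreateStateHandler(GUESSMODE, {
    'AMAZON.YesIntent': function () {
        this.emit(':ask', 'Great, keep guessing!', 'Guess a number.');
    }
});

// The same handler, registered directly under the concatenated name.
const flatHandlers = {
    'AMAZON.YesIntent_GUESSMODE': function () {
        this.emit(':ask', 'Great, keep guessing!', 'Guess a number.');
    }
};
```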
Next up, we're checking to see if this listener exists at all, using the listenerCount method. That method isn't defined here; it's part of EventEmitter, which this inherits from. You can find the source for listenerCount right here, but you'll have to work through it yourself, because this isn't the Node.js-source-in-depth series. It's simple enough, though.
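Putting the pieces together, the tail end of EmitEvent behaves roughly like the sketch below (again a paraphrase, not verbatim, with emitter standing in for the handler context, which inherits from EventEmitter):

```javascript
// Sketch of the tail of EmitEvent: append the state, then fall back to the
// state's Unhandled handler if nothing is registered for this event.
function dispatch(emitter, eventString, state) {
    eventString += state;                           // e.g. 'AMAZON.YesIntent_GUESSMODE'

    if (emitter.listenerCount(eventString) < 1) {   // nothing registered for event + state
        eventString = 'Unhandled' + state;          // fall back to the state's Unhandled
    }

    if (emitter.listenerCount(eventString) < 1) {   // not even an Unhandled handler
        throw new Error(`No 'Unhandled' function defined for event: ${eventString}`);
    }

    emitter.emit(eventString);                      // hand off to the registered handler
}
```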
Notice the focus on the Unhandled event here. I don't think the official docs do a good job of explaining this or pointing it out, leaving people who are new to Alexa development largely to figure it out on their own. What you need to know is that if the user does something you haven't accounted for in a given state, you need an Unhandled handler to clean up the mess.
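In practice that can be as small as a state-scoped catch-all that re-prompts (a sketch; the wording and the GUESSMODE state are just examples):

```javascript
const Alexa = require('alexa-sdk');

// A catch-all for the example GUESSMODE state: anything unexpected gets a
// re-prompt instead of blowing up with an unhandled-event error.
const guessModeHandlers = Alexa.CreateStateHandler('_GUESSMODE', {
    'Unhandled': function () {
        this.emit(':ask',
            "Sorry, I didn't get that. Try guessing a number.",
            'Please guess a number.');
    }
});
```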
Finally, at the end, the event gets emitted for our handler to take care of. Of course, those handlers have to be registered somehow, and that's what we'll look at in the next post in the series, on RegisterHandlers.