Exploring Dialogflow: Understanding Agent Interaction

Dialogflow is a powerful tool that allows us to create conversational agents without having to handle the complications of natural language processing ourselves. But before we dive into the platform, it’s important to understand the different concepts that tie together to form the agents we can create. When I started exploring the platform I jumped in without knowing what was what, so in this article I want to quickly run through each of the concepts to provide some foundational understanding of the platform.

Just as you would say hello to a friend before conversing with them, invoking an agent on the Actions platform works in the same way: it kicks off the experience with our agent in a conversational manner. At this point the user is requesting to speak to our agent, and the invocation is detected using the recognisable terms that we define in the Dialogflow console. This allows us, as developers, to define the way in which our conversational experience is started.

An Intent is a specific action that the user can invoke by using one of the terms defined in the Dialogflow console. For example, the user could ask “Is it going to rain today?” or “Where is the nearest pizza restaurant?”. If these terms are defined within the console, they will be detected by Dialogflow and the intent that they are defined under will be triggered.

The list of defined intents for your agent can be found by navigating to the Intents navigation item within the menu on the left in the console.

An Intent allows us as developers to define a selection of individual tasks that can be invoked by the user. You should aim to keep these intents focused, concentrating on the functionality that they are created for. This keeps each invocation short and gives the user the desired response in a shorter time frame.

Within our Intent we are able to define a list of User Says options. These allow us to define the different phrases that a user can say in order to trigger our intent. This list of User Says options can be found by navigating into the desired intent:

The more User Says options that we define, the easier it will be for users to trigger their desired Intent. Above you can see a list of varying options, but it would be possible for me to grow this even further. For example, when talking about wanting to learn how to play a guitar chord the sentence could begin with many different actionable words:

  • Show me how to play a chord
  • Teach me how to play a chord
  • Tell me how to play a chord
  • Can you teach me a chord
  • I want to know how to play a chord
  • I want to learn how to play a chord
  • How do I play a chord

The list is probably endless — but from this example alone you can see that all the requests end the same, but every invocation here begins in a different way and it’s important to cater for all the different possibilities to provide the smoothest experience with our agent.
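To make the point concrete, here is a deliberately naive sketch of the coverage problem. This is not how Dialogflow matches requests internally (it uses machine learning to generalise beyond the exact phrases you provide), but a toy exact-match agent shows why listing many User Says variations improves the odds of the user landing on the right intent. The intent name and phrases are just the examples from above.

```python
# Toy illustration only: a naive agent that matches requests verbatim
# against the User Says list. Dialogflow's real matching generalises
# beyond exact phrases, but the coverage principle is the same.
USER_SAYS = {
    "play_chord": [
        "show me how to play a chord",
        "teach me how to play a chord",
        "tell me how to play a chord",
        "can you teach me a chord",
        "i want to know how to play a chord",
        "i want to learn how to play a chord",
        "how do i play a chord",
    ],
}

def match_intent(request):
    """Return the intent whose User Says list contains the request."""
    normalised = request.strip().lower()
    for intent, phrases in USER_SAYS.items():
        if normalised in phrases:
            return intent
    return None

print(match_intent("Teach me how to play a chord"))  # play_chord
print(match_intent("Help me strum a chord"))         # None, phrase not covered
```

The second call fails simply because that phrasing was never listed, which is exactly the gap that adding more User Says options (or letting Dialogflow generalise) closes.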

An Entity is a property which can be used by Dialogflow to answer the request from the user — the entity will usually be a keyword within the request such as a name, date, location etc. When the user speaks or types their request, Dialogflow will look for the entity and the value of the given entity can be used within the request.

Dialogflow already contains a set of pre-defined system entities which can be used when constructing intents. If these are not enough, we also have the ability to define our own entities for use within our intents. We can define these entities within the Dialogflow console — the screenshot below demonstrates this:

On the left-hand side you define the value for the Entity; this is the value to be used when the Entity is detected by the system. On the right-hand side are the synonyms for each value. These allow us to define the different phrases that can be used for the entity, improving the chance of recognising the user’s request.
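The value/synonym table from the console can be pictured as a simple mapping. The sketch below uses a hypothetical custom “chord” entity (the names and synonyms are invented for illustration): each canonical value on the left resolves from any of its synonyms on the right, which is effectively what Dialogflow does when it extracts the entity from a request.

```python
# Hypothetical custom "chord" entity: canonical values (left-hand column
# in the console) mapped to the synonyms (right-hand column) a user
# might actually say. Names here are invented for illustration.
CHORD_ENTITY = {
    "a_major": ["a", "a major", "a maj"],
    "e_minor": ["e minor", "em", "e min"],
}

# Invert the table so any synonym looks up its canonical value.
SYNONYMS = {
    syn: value
    for value, syns in CHORD_ENTITY.items()
    for syn in syns
}

def resolve_entity(token):
    """Map a spoken synonym back to the entity's canonical value."""
    return SYNONYMS.get(token.strip().lower())

print(resolve_entity("Em"))     # e_minor
print(resolve_entity("a maj"))  # a_major
```

However the user phrases it, your fulfilment logic only ever sees the canonical value, which keeps the rest of the agent simple.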

At this point, Dialogflow has the request from the user (along with the entity values extracted from it), so it now needs to fetch the information required to fulfil the user’s request. This data is sent to our webhook so that the required information can be fetched (this will be dependent on your implementation). Once the webhook has fetched the required information, it sends it back to Dialogflow so that it can be presented to the user in the desired manner.
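As a rough sketch of what a webhook does, the function below consumes a request shaped like Dialogflow’s (v2) webhook JSON, where the matched parameters arrive under `queryResult.parameters`, and returns the reply as `fulfillmentText`. The `look_up_weather` helper and the `geo-city` parameter name are stand-ins for whatever data source and entities your own agent uses.

```python
# Minimal webhook sketch, assuming the Dialogflow (v2) webhook JSON shape:
# entity values arrive under queryResult.parameters, and the text to speak
# back goes out as fulfillmentText. look_up_weather is a hypothetical
# placeholder for your own data source.
def look_up_weather(city):
    # Placeholder: a real webhook would call a weather API here.
    return f"It looks sunny in {city} today."

def handle_webhook(request_json):
    params = request_json.get("queryResult", {}).get("parameters", {})
    city = params.get("geo-city", "your area")
    return {"fulfillmentText": look_up_weather(city)}

# Example request, trimmed down to only the fields used above:
request_json = {"queryResult": {"parameters": {"geo-city": "London"}}}
print(handle_webhook(request_json))
```

In production this function would sit behind an HTTPS endpoint that Dialogflow calls, but the request-in, response-out shape is the core of it.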

The response is the content which Dialogflow delivers to the user once the request for fulfilment has completed. On devices with screens this will consist of textual content, plus rich content if present; the textual content will be spoken to the user. On hardware devices without screens the content will only be read out to the user.

The Context is used to keep a reference to parameter values as the user moves between intents throughout the conversation. Context is a powerful concept as it allows us to make decisions in our responses based on previous responses, repair conversations that may break for any reason, and branch off into different intents to create a fluid conversation with the user.
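To sketch how a webhook might reuse a value carried by context, the helper below assumes Dialogflow’s (v2) webhook shape, where active contexts arrive as a list under `queryResult.outputContexts`, each with a `name` and a `parameters` dict. The context name and the `chord` parameter are hypothetical examples, not values Dialogflow provides by default.

```python
# Sketch of reading a parameter carried by an output context, assuming the
# Dialogflow (v2) webhook shape: each context in queryResult.outputContexts
# has a "name" and a "parameters" dict. The "chord-followup" context and
# "chord" parameter below are hypothetical examples.
def get_context_parameter(request_json, context_suffix, key):
    """Find a context whose name ends with context_suffix and read key from it."""
    for ctx in request_json.get("queryResult", {}).get("outputContexts", []):
        if ctx.get("name", "").endswith(context_suffix):
            return ctx.get("parameters", {}).get(key)
    return None

request_json = {
    "queryResult": {
        "outputContexts": [
            {
                "name": "projects/demo/agent/sessions/123/contexts/chord-followup",
                "parameters": {"chord": "e_minor"},
            }
        ]
    }
}
print(get_context_parameter(request_json, "chord-followup", "chord"))  # e_minor
```

A follow-up intent (“show me that again”) could use this to recover which chord the user asked about earlier, without making them repeat themselves.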

I hope this breakdown of concepts has helped to give a foundation of knowledge when it comes to Dialogflow, helping you to jump into the dashboard with confidence and begin building an agent of your own 🙂

I’d love to chat about Dialogflow or anything you’ve read in this article. Feel free to drop me a tweet or leave a response below!
