Exploring App Actions on Android: What are App Actions?

At Google I/O 2018 we were introduced to App Actions, but it was only at I/O this year that we were given the ability to hook into this functionality within our own apps.

In this first part of a series of articles focused on App Actions, I want to take a quick look at exactly what they are, how they work and what they can do for our apps. Throughout the following articles we'll look deeper into these topics and learn how to implement App Actions for ourselves.


App Actions extend the power of the Google Assistant by allowing it to deep link into our applications. With these two technologies coming together we're able to bring a collection of advantages to our applications, and to our users.

The Google Assistant by itself already greatly simplifies a collection of user flows on a device. Let’s take making a phone call as an example – if I were to click through screens and options to call one of my contacts, the flow ends up looking a little like this:

On the other hand, using the Google Assistant on our device allows us to achieve a slightly different flow:

From the above diagrams we can see that the user interaction has been cut down from 7 steps to 2 steps. This gives us a far more streamlined experience – and with this being such a common task on mobile phones, think about how many steps are saved across all of the people using this functionality.

These kinds of flows also touch on an important factor in UX: accessibility. Whilst screen readers can be used to carry out the first user flow shown above, it may at first feel quite tedious for users who are new to these accessible approaches. However, if a user is able to speak to their device (or use a keyboard), then telling it the phrase “OK Google, call John Smith” makes this process far more accessible than it previously was. This flow not only requires less device interaction, it also places less cognitive load on the user.


With App Actions we get the same kind of result as the above. Because we can now provide deep links for the Google Assistant to hook into, it can launch specific points of our application – this allows us to remove all of the steps required to manually navigate to that point of our app before even starting the desired flow.

Let’s take another example of a third-party application making use of this to streamline the experience for users. Nike Run Club is a running app that allows you to track runs and much more – for now I just want to focus on the aspect of recording a new run. Looking at the user flow for recording a new run below, we see a flow similar to the one for manually starting a phone call:

With the use of App Actions, Nike Run Club have implemented their own action that allows users to start a run with a single interaction using the Google Assistant:

After interacting with the Assistant using the phrase “Start my run in Nike Run Club”, the application will open on the activity tracking screen and start the run after a countdown. Again, the number of steps usually required to trigger this flow goes from 7 right down to 2.

With App Actions you’ll be able to take the same approach in your own applications, streamlining the experience for your users across different parts of your app.

We can already see a common pattern coming together from the above examples. Our user is required to first navigate to our app and then navigate to the desired functionality within that app. Even the initial part of navigating to the application itself can introduce a lot of friction – some users will have a lot of applications installed, so even finding that one app amongst the collection on their device can be a task in itself. Whilst the user may have their applications organised, this still involves a certain number of screen interactions to launch that application.

The same also holds true for the navigation to a specific feature inside of an application. Whilst some apps are quite focused and make it simple to reach core parts of the experience, others may be bloated with many different features or simply not make it easy to navigate to the feature that you are looking to launch. This integration with the Google Assistant aims to help solve these pain points when it comes to engaging with applications.


How it all works

When it comes to App Actions, there are a few different parts that tie everything together. We have the Google Assistant, which is essentially the core driver of App Actions – it handles any queries made by the user, along with fulfilling any requests to display slice data. Next we have the Android app, which reacts to any requests that are sent to it by the Google Assistant and fulfils any deep links that are triggered. The Android app defines an actions.xml file which states the different entry points available to the Google Assistant – these entry points are mapped against intents which are supported by the Assistant. All of this is triggered by the user, who then consumes either the deep link or the slice data that is served to the device.
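To make that a little more concrete, here's a minimal sketch of what an actions.xml declaration can look like. The intent name (actions.intent.START_EXERCISE) is one of the Assistant's built-in intents, but the URL template and parameter name here are just placeholders for illustration – your own app would use its own deep link scheme.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- res/xml/actions.xml – referenced from the manifest via a
     <meta-data android:name="com.google.android.actions" ...> entry -->
<actions>
    <!-- Declare that our app can fulfil the built-in START_EXERCISE intent -->
    <action intentName="actions.intent.START_EXERCISE">
        <!-- The deep link the Assistant will open when this intent is matched.
             The host and parameter name are placeholders for this example. -->
        <fulfillment urlTemplate="https://example.com/exercise{?exerciseType}">
            <!-- Map the intent's exercise name onto our deep link's query parameter -->
            <parameter-mapping
                intentParameter="exercise.name"
                urlParameter="exerciseType" />
        </fulfillment>
    </action>
</actions>
```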

One thing you may notice here is the support for both deep links and slices. When it comes to handling App Actions we can provide a response through either one of these methods, and the route that we decide to go down will depend on the flow that the user is currently in. For example, if the user is querying something that can be satisfied with a simple response that doesn't need the context of being inside the app, then providing slice data can be enough. On the other hand, if that isn't quite enough to convey the information to the user, then a deep link can be provided to take them into the application to satisfy their query.
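As a rough illustration of the slice side of this, the snippet below is a minimal androidx.slice provider serving a small piece of read-only content that the Assistant could render without the user opening the app. The class name, URI path and the stats shown are all made up for this example, and the extra wiring (the provider entry in the manifest and the slice URL declared for the Assistant) isn't shown here.

```kotlin
import android.net.Uri
import androidx.slice.Slice
import androidx.slice.SliceProvider
import androidx.slice.builders.ListBuilder
import androidx.slice.builders.list
import androidx.slice.builders.row

// A minimal sketch of a SliceProvider – the Assistant can display this content
// inline, without the user needing to open the app itself.
class RunStatsSliceProvider : SliceProvider() {

    override fun onCreateSliceProvider(): Boolean = true

    override fun onBindSlice(sliceUri: Uri): Slice? {
        val ctx = context ?: return null
        return when (sliceUri.path) {
            // Hypothetical path for a "weekly stats" style query
            "/stats" -> list(ctx, sliceUri, ListBuilder.INFINITY) {
                row {
                    setTitle("This week")
                    setSubtitle("12.4 km over 3 runs")
                }
            }
            else -> null
        }
    }
}
```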


Connecting assistant requests with your app

When it comes to these integrations with our app, we are required to make use of intents which are supported and specified by the Google Assistant. Currently there is only a small collection of supported intents, but this will be expanded in the future. These supported intents currently fall into four different categories of applications: Finance, Food Ordering, Ridesharing and Health &amp; Fitness.

If your application falls into one of these categories then you can hook into some of the already supported intents to start providing support for App Actions right away. Whilst each of these categories only supports a small subset of intents, these will also grow to support more over time.

When it comes to how this fits together, there are a couple of different parts involved in the process. To begin with, our Android app defines an actions.xml file. This file declares the supported intents which we are providing deep links for – the declarations within this file act as entry points for the Google Assistant to hook into. So, for example, if we have added support for the START_EXERCISE intent, then when this intent is triggered via the Assistant alongside our application name, our application will receive the link and data for the request. At this point our application can use this deep link to open the required screen, parsing and using any data that has been passed through with the request – the sketch below shows roughly what that handling might look like.
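This is only a sketch, assuming the placeholder actions.xml shown earlier where the exercise name is mapped onto an exerciseType query parameter – the activity name and the helper functions are hypothetical, and your own deep link handling will depend on how your screens are structured.

```kotlin
import android.net.Uri
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity

// Hypothetical Activity registered for the deep link declared in actions.xml
class ExerciseActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_exercise)
        handleAppActionIntent(intent?.data)
    }

    private fun handleAppActionIntent(data: Uri?) {
        // "exerciseType" matches the urlParameter we mapped in actions.xml
        val exerciseType = data?.getQueryParameter("exerciseType")
        if (exerciseType != null) {
            // Launched from the Assistant – jump straight into the desired flow
            startExercise(exerciseType)
        } else {
            // Launched normally – fall back to the usual in-app experience
            showExerciseList()
        }
    }

    private fun startExercise(type: String) {
        // Start tracking the requested exercise type (app-specific logic)
    }

    private fun showExerciseList() {
        // Show the default screen when no deep link data was provided
    }
}
```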

We’ll cover these intents more deeply in the next part of this series.


From this brief introduction to App Actions we’ve been able to learn a little more about what they are, along with a high-level view of the different moving parts that come together to provide our users with these Actions. In the following parts of this article series we’ll dive into each section so we can learn more about how App Actions work and how we can build them into our own projects.

In the meantime, if you’d like to learn more about building App Actions then you can check out the Google App Actions Codelab and get started building one for yourself!

