Last week Firebase announced a new feature within ML Kit that has entered its beta stage: Smart Reply. If you’re not familiar with Smart Reply, it allows applications to supply a collection of suggested responses to user input, based on the previous content of the current conversation. A common use case for this is within email and messaging applications; you may have already seen it in apps such as Gmail:
You may not have used these specifically in Gmail, but it’s likely that you’ve seen them in other applications that you use. And now, thanks to Firebase ML Kit, you’ll be able to implement that same functionality within your own apps using Firebase Smart Reply.
In a nutshell – Smart Reply makes use of TensorFlow under the hood to provide a collection of suggestions based on the user’s previous input. As developers, we just need to provide the library with the previous conversation content that we wish to be used for the prediction logic. TensorFlow will then take this content and predict the next possible routes within the conversation – these predictions are then provided as suggestions for the user to select.
Smart Reply runs completely on-device, so no network connection is required to generate suggestions. Currently, in the beta version, only the English language is supported, but this might change as things move forward. When generating suggestions, the 10 most recent messages are used. Smart Reply will also check that the textual content does not cover a sensitive topic – if it does, no suggestions will be provided, similar to the way that none are currently provided for non-English languages.
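Because only the 10 most recent messages are considered, one small piece of housekeeping you can do on the app side is to trim your own history before handing it over. A minimal sketch – the trimHistory helper and MESSAGE_LIMIT value here are my own, not part of ML Kit:

```kotlin
// Smart Reply only looks at the most recent messages, so there is no need
// to pass the whole conversation. 10 mirrors the documented window.
val MESSAGE_LIMIT = 10

// Keep only the last `limit` items of a conversation history.
fun <T> trimHistory(history: List<T>, limit: Int = MESSAGE_LIMIT): List<T> =
    if (history.size <= limit) history else history.takeLast(limit)
```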
With that in mind, let’s dive straight in with how we can get this working for ourselves!
To begin with, you’ll need to set up Firebase in your project. I’m not going to cover this here, as the Firebase documentation does a great job already – it’s the same process that you would have followed in other projects where you’ve used Firebase.
You’ll next need to add the required dependencies for Smart Reply:
implementation 'com.google.firebase:firebase-ml-natural-language:18.2.0'
implementation 'com.google.firebase:firebase-ml-natural-language-smart-reply-model:18.0.0'
We also need to add the following to prevent any compression from being applied to the TensorFlow components:
android {
    // ...
    aaptOptions {
        noCompress "tflite"
    }
}
Now that we have the configuration side of things set up, we can go ahead and dive into the implementation of Smart Reply. There isn’t really too much that we need to do on our side – most of the work is handled by the library itself. The general developer flow for handling smart replies looks like this:
There are two points at which we will want to handle the generation of smart replies – when the conversation is first opened (creating the conversation history) and when the conversation is updated. Once either of these events has occurred, we make use of ML Kit to retrieve the smart replies for us based on the provided conversation content. This request is asynchronous, but once it completes we can either display the provided smart replies or handle the error state should the retrieval of these replies fail.
We begin by creating a reference to a collection of conversation content. Each conversation item is stored as an instance of FirebaseTextMessage. This class is part of the ML Kit library and holds a reference to the textual content of the message, as well as the timestamp of when the message was created.
val chatHistory = ArrayList<FirebaseTextMessage>()
Now that we have this reference, we need to populate it with content from our conversation. As previously noted, there are two cases where we will need to handle a change in message content – when the conversation is first created and when it is updated. In both of these cases we need to add the textual content as a FirebaseTextMessage instance, but the data needs to be provided in a way that lets ML Kit know whether the message was sent by the current user on the device or received from another user in the conversation.
Luckily, the Smart Reply library provides us with two different FirebaseTextMessage creation functions which handle both of these routes. If we are adding a message that was sent by the user on the current device, we make use of the createForLocalUser method – this takes the message text along with the timestamp at which the message was created.
chatHistory.add(FirebaseTextMessage.createForLocalUser(messageText, timestamp))
When we are adding message content for a remote user (to handle messages that are being received in the conversation) then we make use of the createForRemoteUser method.
chatHistory.add(FirebaseTextMessage.createForRemoteUser(message.text, message.timestamp, userId))
You can see here that the methods are almost identical, with the addition of a userId field when creating a remote user message. Because there may be multiple remote users messaging us, Smart Reply needs to be able to uniquely identify the sender of each message so that the reply suggestions can be created with the correct context.
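Putting the two creation functions together, here is a sketch of mapping an app-side message model onto FirebaseTextMessage instances – the ChatMessage class and buildHistory helper are hypothetical stand-ins for your own data layer, not part of ML Kit:

```kotlin
// Hypothetical app-side message model - not part of the ML Kit library.
data class ChatMessage(
    val text: String,
    val timestamp: Long,
    val senderId: String,
    val isLocal: Boolean
)

// Convert our own messages into the FirebaseTextMessage instances that
// Smart Reply expects, picking the correct factory method for each one.
fun buildHistory(messages: List<ChatMessage>): List<FirebaseTextMessage> =
    messages.map { message ->
        if (message.isLocal) {
            FirebaseTextMessage.createForLocalUser(message.text, message.timestamp)
        } else {
            FirebaseTextMessage.createForRemoteUser(
                message.text, message.timestamp, message.senderId
            )
        }
    }
```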
Now that we’ve created the required data from our messages, we need to actually generate our smart replies. For this we just need to call the suggestReplies() method from the library, passing in our FirebaseTextMessage instances.
FirebaseNaturalLanguage.getInstance().smartReply
    .suggestReplies(chatHistory)
Once we’ve made this call, we’re passed back an instance of SmartReplySuggestionResult – this instance contains a collection of suggestions along with a status for the operation. Before handling the suggestions, it’s best to check the status, as it could be one of the following:
- STATUS_NOT_SUPPORTED_LANGUAGE – The language used within the conversation isn’t supported by the model handling smart reply
- STATUS_NO_REPLY – There were no replies that could be generated for the conversation
- STATUS_SUCCESS – Replies were generated successfully
When the generation of smart replies is successful, there will be between one and three suggestions that you can make use of. Each of these is a SmartReplySuggestion instance, on which you can use getText() to retrieve the textual content of the smart reply.
The complete flow for smart reply retrieval will look something like this within our code:
FirebaseNaturalLanguage.getInstance().smartReply
    .suggestReplies(chatHistory)
    .addOnCompleteListener {
        if (it.isSuccessful && it.result?.status == STATUS_SUCCESS) {
            // do something with it.result?.suggestions
        } else {
            // handle error
        }
    }
You can see here that we add a completion listener, then check both that the request was successful and that the status represents a success state. The request can fail in either of two ways – the task itself may not complete successfully, or the result status for smart reply retrieval may not return a success state. In those cases, it’s up to your application to decide how to handle the situation. If users are used to seeing reply suggestions within your application, it could be worth letting them know that these could not be generated, especially if generation has failed even after retrying the request.
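One way to keep those two failure modes separate is to attach success and failure listeners rather than a single completion listener. A sketch, assuming the same chatHistory as above – displaySuggestions and hideSuggestionUi are hypothetical functions standing in for your own UI handling:

```kotlin
FirebaseNaturalLanguage.getInstance().smartReply
    .suggestReplies(chatHistory)
    .addOnSuccessListener { result ->
        when (result.status) {
            SmartReplySuggestionResult.STATUS_SUCCESS ->
                displaySuggestions(result.suggestions)
            // No suggestions could be produced - hide any suggestion UI.
            SmartReplySuggestionResult.STATUS_NOT_SUPPORTED_LANGUAGE,
            SmartReplySuggestionResult.STATUS_NO_REPLY ->
                hideSuggestionUi()
        }
    }
    .addOnFailureListener {
        // The task itself failed (e.g. the model could not be run).
        hideSuggestionUi()
    }
```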
Once you’re all done with the above, you can make use of the close() method to shut down the smart reply models and resources.
FirebaseNaturalLanguage.getInstance().smartReply.close()
Once we’ve put all of the above into action, we can display suggestions in our app using the Smart Reply functionality from ML Kit. A common look and feel for these is via Chip widgets, but this will depend on the design of your application.
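As a sketch of the Chip approach, assuming a ChipGroup already exists in your layout – the showSuggestions helper and onSuggestionSelected callback here are my own illustration, not part of ML Kit:

```kotlin
// Render each Smart Reply suggestion as a tappable Material Chip,
// replacing whatever suggestions were previously shown.
fun showSuggestions(
    chipGroup: ChipGroup,
    suggestions: List<SmartReplySuggestion>,
    onSuggestionSelected: (String) -> Unit
) {
    chipGroup.removeAllViews()
    suggestions.forEach { suggestion ->
        val chip = Chip(chipGroup.context).apply {
            text = suggestion.text
            setOnClickListener { onSuggestionSelected(suggestion.text) }
        }
        chipGroup.addView(chip)
    }
}
```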
You can view full sample code for Firebase Smart Reply over in the official repository, here.
With all that said, this article aimed to introduce you to Smart Reply and provide solid examples of how you can add it to your own applications. Smart Reply will help us to create simpler experiences for our users, allowing them to converse with less friction. Are you going to make use of Smart Reply? Do you have any questions about how to get started? Feel free to reach out to me and share some thoughts and feedback!