Exploring Firebase MLKit on Android: Introducing MLKit (Part one)

At Google I/O this year we saw the introduction of Firebase ML Kit, a part of the Firebase suite that aims to make it easier for us to support intelligent features in our apps. The SDK currently comes with a collection of pre-defined capabilities that are commonly required in applications — you’ll be able to implement these in your application regardless of whether you are familiar with machine learning or not.

Now, everything that Firebase ML Kit offers is already possible to implement yourself using various machine-learning technologies. The thing with Firebase ML Kit is that as well as wrapping these technologies up for us, it offers their capabilities inside of a single SDK.

Whilst it is possible to implement these things without Firebase ML Kit, there are a few reasons why we might struggle to do so:

  • A lack of machine-learning knowledge may hold us back from being able to implement such features — maybe we find it overwhelming, or we just don’t have the time to ramp up in these areas.
  • Finding machine-learning models that are accurate and well trained is not only difficult in itself; it is also hard to choose which ones to use and then optimise them for your platform.
  • Hosting your ML model for cloud access can also bring difficulty to your implementation. Packaging it within your app can sometimes be a more straightforward approach, but that itself comes with some drawbacks.

With these in mind, it can be difficult to know where to start. This is one of the main goals of Firebase ML Kit — to make machine learning more accessible to developers of Android and iOS applications, and available in more apps. Currently, ML Kit offers the ability to:

  • Recognise text
  • Recognise landmarks
  • Detect faces
  • Scan barcodes
  • Label images

To utilise these features, all we need to do is pass our desired data to the SDK and in return we will receive a result back. The data returned will depend on the machine-learning capability being used — you just need to extract what you need from the response that comes back (we’ll see what this looks like in code a little further down).

And if none of the above satisfies your machine-learning requirements, Firebase ML Kit offers the ability for you to upload your own custom TensorFlow Lite models, so that you don’t need to worry about hosting these models or serving them to your users’ devices.
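As a rough idea of what that looks like, registering a cloud-hosted model goes along the lines of the sketch below. Note that this part of the SDK sits behind a separate firebase-ml-model-interpreter dependency rather than the vision one we’ll add later, and "my_model" is just a placeholder for whatever name you give your model in the Firebase console:

import com.google.firebase.ml.custom.FirebaseModelManager
import com.google.firebase.ml.custom.model.FirebaseCloudModelSource

// A rough sketch of registering a cloud-hosted TensorFlow Lite model.
// "my_model" is a placeholder for the name given to the model when it
// was uploaded in the Firebase console.
fun registerCustomModel() {
    val cloudSource = FirebaseCloudModelSource.Builder("my_model")
        .enableModelUpdates(true) // pick up new versions as they're uploaded
        .build()

    FirebaseModelManager.getInstance().registerCloudModelSource(cloudSource)
}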


One of the nice things about Firebase ML is that it offers its machine-learning abilities both on the device and in the cloud, which allows you to be creative and mindful of how and when you use machine learning. For example, some operations can be intensive, so we need to bear this in mind — luckily, we have the choice between on-device and cloud processing for most of the Firebase ML capabilities. On-device APIs in Firebase ML Kit are designed to work fast and will be able to provide results even when a network connection isn’t present. On the other hand, cloud-based APIs utilise the Google Cloud Platform’s machine-learning technology to provide an increased level of accuracy.

An example of thinking about when to use on-device and cloud-based learning

For example, we may want to base our choice of on-device or cloud-based processing on a number of different factors. The diagram above is just an example — maybe the user would be able to specify the method of learning they want to use, or maybe we would base the method on whether the device has connectivity, as well as the quality of that connection. If you do decide to do something along these lines, then it’s great that Firebase provides the ability for you to decide the source from which the results are going to be retrieved.
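To sketch that idea out in code (the dependency setup is covered just below), the choice could look something like this. I’m using the image-labelling detectors as the example here, and how the networkAvailable flag gets populated is left up to your app:

import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// A rough sketch: use the cloud-based label detector when we know a
// connection is available, otherwise fall back to the on-device one.
fun labelImage(image: FirebaseVisionImage, networkAvailable: Boolean) {
    if (networkAvailable) {
        FirebaseVision.getInstance().visionCloudLabelDetector
            .detectInImage(image)
            .addOnSuccessListener { labels ->
                // Cloud results are generally more accurate
            }
    } else {
        FirebaseVision.getInstance().visionLabelDetector
            .detectInImage(image)
            .addOnSuccessListener { labels ->
                // On-device results are fast and work offline
            }
    }
}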

But saying that, we don’t actually have this choice for all of the recognition capabilities available through ML Kit. As the summary below shows, not all capabilities are available on-device, just as some are not available in the cloud. At the time of writing:

  • Text recognition: on-device and cloud
  • Face detection: on-device only
  • Barcode scanning: on-device only
  • Image labelling: on-device and cloud
  • Landmark recognition: cloud only


Once you have added Firebase to your project, you can get started with ML Kit by adding the base vision dependency:

implementation 'com.google.firebase:firebase-ml-vision:15.0.0'

Each of the ML capabilities requires this dependency; some require an extra individual one on top, but we will cover those throughout this series of articles.

At this point, you will have the vision tools available to your app. This is where you provide the input data to the model from your application — maybe content for barcode scanning or face detection — and ML Kit will then provide back a result based on this data which you can apply within your application.
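As a small taste of that flow, here’s a minimal sketch of running on-device text recognition against a Bitmap that your app has already loaded. The detector hands back a result containing blocks of recognised text:

import android.graphics.Bitmap
import android.util.Log
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage

// A minimal sketch: run on-device text recognition on a Bitmap that
// has already been loaded elsewhere in your app.
fun recognizeText(bitmap: Bitmap) {
    val image = FirebaseVisionImage.fromBitmap(bitmap)
    val detector = FirebaseVision.getInstance().visionTextDetector

    detector.detectInImage(image)
        .addOnSuccessListener { result ->
            // The recognised text comes back grouped into blocks
            for (block in result.blocks) {
                Log.d("MLKit", "Found text: ${block.text}")
            }
        }
        .addOnFailureListener { error ->
            Log.e("MLKit", "Text recognition failed", error)
        }
}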

By default, for on-device use an ML model will be downloaded the first time it is required by your application. However, if you wish to download the required models at install time, then you can do so by adding the following meta-data to your manifest:

<meta-data
    android:name="com.google.firebase.ml.vision.DEPENDENCIES"
    android:value="barcode, face, other_model_names..." />

Whether you do this should really depend on your application. If the ML model is a core part of your application experience then downloading it at install time would make sense; otherwise, the models should simply be downloaded as they are required.


I hope this has been a nice introduction to Firebase MLKit, I’m excited to explore each of the capabilities with you. Stay tuned over the coming weeks as we dive into each of these. In the meantime, feel free to leave any questions or comments below 🙂
