Exploring CameraX on Android: Camera View


If you’ve ever used the Camera APIs on Android, you may have felt that they’ve never been the simplest thing to implement. There was originally the Camera API, which was deprecated in favour of the Camera2 API – an iteration that aimed to give developers a better experience when working with the camera on Android. However, there was still a lot of boilerplate involved (even for simple use cases) and many of the difficulties remained when it came to implementing camera features within Android applications. Luckily for us, the new CameraX API aims to alleviate these pain points by providing a simpler approach to camera feature development. Whilst CameraX is built on top of the Camera2 API, it greatly simplifies the implementation process and supports a minSdkVersion of 21 and above. In this article we’re going to dive into the first part of the CameraX API, learning what it is and how we can get started with it in our applications.


Setting up CameraX

The CameraX component consists of two different concepts which make up its implementation – these are the Camera View and the Camera Core. The Camera View alone can be used to handle basic camera requirements, such as taking a picture, recording a video, lifecycle management and camera switching. When it comes to more complex CameraX implementations, the Core library can be used alongside the CameraView to handle these situations (such as providing a view finder for the current camera context). In this article, we’ll take a look at how the initial Camera View component works.

To begin with, there are a couple of setup steps to get our project configured to make use of the CameraX library. However, it does not take many steps (or even much code) to go from adding the permission to having a simple camera implementation within your app.


We’ll begin by adding the Camera permission to our application manifest:

<uses-permission android:name="android.permission.CAMERA" />
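Since the camera permission is classed as dangerous, on Android 6.0 (API 23) and above it must also be requested at runtime before the camera can be used. A minimal sketch of that check (the request code value here is arbitrary):

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

class MainActivity : AppCompatActivity() {

    private val requestCodeCamera = 101 // arbitrary request code

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        ensureCameraPermission()
    }

    // Request the camera permission at runtime if it hasn't been granted yet.
    private fun ensureCameraPermission() {
        if (ContextCompat.checkSelfPermission(this, Manifest.permission.CAMERA)
            != PackageManager.PERMISSION_GRANTED
        ) {
            ActivityCompat.requestPermissions(
                this, arrayOf(Manifest.permission.CAMERA), requestCodeCamera
            )
        }
    }
}
```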

And next, we need to add the required dependencies to our project:

def camerax_version = "1.0.0-alpha01"

// Add the CameraX core dependency
implementation "androidx.camera:camera-core:${camerax_version}"

// Add the CameraX Camera2 API interop support dependency
implementation "androidx.camera:camera-camera2:${camerax_version}"

Note: The CameraView isn't available just yet, but you can view the source here. Because of this, the implementation details are likely to change.

You’ll notice that there are two different dependencies here:

  • The Camera Core library provides us with the required classes for using the CameraX library
  • The CameraX Camera2 dependency provides us with some interop features so that we can integrate CameraX with our existing Camera2 implementation

And with the above in place, we can now look at how we would go about implementing the camera view component to our application.


CameraView

As mentioned, the CameraView provides a way for developers to provide a basic camera implementation within their apps without much overhead. We can add this component directly to our layout file:

<androidx.camera.view.CameraView
    android:id="@+id/view_camera"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />

This CameraView class is a ViewGroup that essentially contains a TextureView to handle the display of the camera feed, along with a collection of attributes that can be used to configure how the component operates.


  • scaleType – set the scale type to be used for the captured stream. This can be either CENTER_CROP or CENTER_INSIDE
  • quality – set the quality to be used for the captured media. This can be one of MAX, HIGH, MEDIUM or LOW
  • pinchToZoomEnabled – a boolean value representing whether or not the user can pinch-to-zoom within the camera view
  • captureMode – the capture mode to be used for the camera view. This can be one of IMAGE, VIDEO or FIXED
  • lensFacing – set the lens to be used for the media capture. This can be one of FRONT, BACK or NONE
  • flashMode – set the flash mode to be used for the camera view instance. This can be one of AUTO, ON or OFF

Whilst you can set these as XML attributes in your layout file, you can also toggle them programmatically. So if you want to provide UI components that set the value for any of the above, click listeners can be used to toggle the state of the assigned values.
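As a sketch of what this might look like – bearing in mind that the CameraView API is not final, so the property names here are assumptions mirroring the XML attributes, and flash_button and zoom_switch are hypothetical views in our layout:

```kotlin
// Cycle the flash mode each time the flash button is tapped.
// FlashMode is the CameraX enum of AUTO / ON / OFF values.
flash_button.setOnClickListener {
    view_camera.flash = when (view_camera.flash) {
        FlashMode.OFF -> FlashMode.ON
        FlashMode.ON -> FlashMode.AUTO
        else -> FlashMode.OFF
    }
}

// Enable or disable pinch-to-zoom from a toggle switch.
zoom_switch.setOnCheckedChangeListener { _, isChecked ->
    view_camera.isPinchToZoomEnabled = isChecked
}
```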

Now that we have the CameraView laid out within our activity we can make use of its bindToLifecycle method to bind the view to the lifecycle of the current component. This means that our camera view will start and stop based on the lifecycle that it has been attached to.

class MainActivity : AppCompatActivity() { 

    override fun onCreate(savedInstanceState: Bundle?) { 
        ...
        view_camera.bindToLifecycle(this) 
    }
}

Once the above has been configured and added to our project, we have a simple CameraView ready to capture media within our app. It’s important to note that the CameraView alone cannot be extended to do much more than the above. The aim of the view is to provide a simplified camera implementation that is conveniently available in the form of a view. If you want to achieve more than this then you will need to use the CameraX Core library, which we will cover in the next article.

If you’ve added the above to your application then you should be able to launch a camera and see the preview displayed on screen. The CameraView comes with a collection of methods that we can trigger when the user interacts with the UI:

  • toggleCamera() – Toggle the camera being used on the device (e.g between the front and back camera)
  • enableTorch() – Enable the torch on the device
  • setCameraByLensFacing() – Set the camera to be used via a Lens facing the given direction. This can be either LensFacing.BACK or LensFacing.FRONT.
  • hasCameraWithLensFacing() – Check whether or not the camera has a lens with the corresponding LensFacing value
  • focus() – Tell the camera to focus, based on the given Rect instances
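For example, the first two of these could be wired straight up to button presses. A sketch, assuming switch_button and torch_button are hypothetical views in our layout (and remembering that the exact method signatures may still change before release):

```kotlin
// Switch between the front and back camera, guarding against
// devices that don't actually have a front-facing lens.
switch_button.setOnClickListener {
    if (view_camera.hasCameraWithLensFacing(LensFacing.FRONT)) {
        view_camera.toggleCamera()
    }
}

// Turn on the torch for the current camera.
torch_button.setOnClickListener {
    view_camera.enableTorch()
}
```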

When it comes to taking photos, we have access to a takePicture method that can be used to capture an image from the camera. Here we need to pass in a file reference for where the image data should be saved to, along with a listener for when the image has either been saved successfully or when an error has occurred.

camera_view.takePicture(File("some_file_path"),
    object : ImageCaptureUseCase.OnImageSavedListener {

        override fun onImageSaved(file: File) {
            // Handle image saved
        }

        override fun onError(
            error: ImageCaptureUseCase.UseCaseError,
            message: String,
            throwable: Throwable?
        ) {
            // Handle image error
        }
    })

When capturing an image causes an error, the ImageCaptureUseCase.UseCaseError will return us one of the following error states:

  • UNKNOWN_ERROR
  • FILE_IO_ERROR

There is also another format of the takePicture() method which just takes an instance of the OnImageCapturedListener callback. This can be used to listen for when images are captured (or an error occurs) and then handle the result data accordingly. Whilst the previous takePicture() method provided a simpler approach, this gives you more flexibility when handling image captures.

camera_view.takePicture(
    object : ImageCaptureUseCase.OnImageCapturedListener() {
        override fun onCaptureSuccess(
            image: ImageProxy, 
            rotationDegrees: Int
        ) {
            // Handle image captured
        }

        override fun onError(
            useCaseError: ImageCaptureUseCase.UseCaseError?, 
            message: String?, 
            cause: Throwable?
        ) {
            // Handle image capture error
        }
    })

We may also want to record video using our CameraView. For this, we make use of the startRecording() method – we just need to pass in a file reference that is to be used for saving the result, along with a listener to handle the success and error state of the operation.

camera_view.startRecording(File("some_file_path"),
    object : VideoCaptureUseCase.OnVideoSavedListener {

        override fun onVideoSaved(file: File?) {
            // Handle video saved
        }

        override fun onError(
            error: VideoCaptureUseCase.UseCaseError?, 
            message: String?, 
            throwable: Throwable?
        ) {
            // Handle video error
        }
    })

Here you can see that the onVideoSaved method returns us a File instance for our saved video data. We also have the onError function that can be used to handle the error state, reacting accordingly within our UI. When recording a video causes an error, the VideoCaptureUseCase.UseCaseError will return us one of the following error states:

  • UNKNOWN_ERROR
  • ENCODER_ERROR
  • MUXER_ERROR
  • RECORDING_IN_PROGRESS
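Inside of onError we could then branch on these states to react appropriately. A sketch, assuming we simply want to surface a message to the user (showMessage here is a hypothetical helper):

```kotlin
override fun onError(
    error: VideoCaptureUseCase.UseCaseError?,
    message: String?,
    throwable: Throwable?
) {
    // Map each error state to a user-facing message.
    when (error) {
        VideoCaptureUseCase.UseCaseError.ENCODER_ERROR ->
            showMessage("Video encoding failed")
        VideoCaptureUseCase.UseCaseError.MUXER_ERROR ->
            showMessage("Could not write the video file")
        VideoCaptureUseCase.UseCaseError.RECORDING_IN_PROGRESS ->
            showMessage("A recording is already in progress")
        else -> showMessage("Something went wrong: $message")
    }
}
```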

When the user wishes to stop recording video, we just need to call the stopRecording() method to let the use case know that we wish to halt the recording process:

camera_view.stopRecording()
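A common pattern is a single button that toggles between starting and stopping a recording. A sketch of how that could look, assuming record_button is a hypothetical view in our layout, videoSavedListener is the OnVideoSavedListener from earlier, and isRecording is a flag we track ourselves:

```kotlin
private var isRecording = false

record_button.setOnClickListener {
    if (isRecording) {
        // Finish the current recording and trigger onVideoSaved.
        camera_view.stopRecording()
    } else {
        // Begin a new recording into the given file.
        camera_view.startRecording(File("some_file_path"), videoSavedListener)
    }
    isRecording = !isRecording
}
```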

And finally, when we’re finished with our CameraView and all of its operations above, we must be sure to unbind the camera to free up any used resources.

override fun onDestroy() {
    super.onDestroy()
    CameraX.unbindAll()
}

In this article we’ve looked at the new CameraView available in the CameraX library, learning about how it works and what it is capable of. This is a huge improvement when it comes to implementing camera features within Android applications, especially when advanced functionality is not needed. Will you be using the CameraView? If you have any thoughts or questions that you’d like to share, then please do reach out!

In my next article I’ll be writing about the Use Case classes found within the CameraX Core library – be sure to follow me to see when it’s posted 🙂
