In my last article we introduced the CameraX API, along with the Camera View component found within the source of the project. In this article we’re going to dive into the second part of the CameraX API – the core library – to learn what it is and how we can make use of the functionality that it provides for our applications.
Camera Core Use Cases
The Core library brings us the concept of what are known as Use Cases. These Use Cases implement specific pieces of functionality that we can use to ease the implementation process for common camera requirements. When it comes to CameraX, there are several different use cases which have been catered for – these consist of:
- Preview – Used to prepare a view finder for the camera preview. This can be bound multiple times in any given context.
- Image Capture – Used for low latency image captures. This can only be bound a single time in any given context.
- Image Analysis – Used to perform analysis on images. This can be bound multiple times in any given context.
When it comes to implementing these use cases, we can make use of the Camera Core API to do so. This involves several steps within our development workflow, which are sketched together after this list:
- Specify the use cases which we wish to implement
- Configure the use case to operate with the desired properties
- Add listeners to handle any data that is output
- Bind the use case to the lifecycle of the enclosing component
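Before we dive into each use case in detail, here’s a rough sketch of how these steps fit together, using the Preview use case (covered fully below) purely as an illustration – the names and option values here are illustrative, not prescriptive:

// A minimal sketch of the workflow, assuming the alpha CameraX API used
// throughout this article – names and option values are illustrative.

// 1. Specify the use case and 2. configure it with the desired properties
val config = PreviewConfig.Builder()
    .setLensFacing(CameraX.LensFacing.BACK)
    .build()
val previewUseCase = Preview(config)

// 3. Add a listener to handle any data that is output
previewUseCase.onPreviewOutputUpdateListener = Preview.OnPreviewOutputUpdateListener {
    // handle the PreviewOutput, e.g. attach its SurfaceTexture to a TextureView
}

// 4. Bind the use case to the lifecycle of the enclosing component
CameraX.bindToLifecycle(this, previewUseCase) // 'this' being a LifecycleOwner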
To get started with implementing these use cases, we need to begin by adding the camera-core dependency to our project:
def camerax_version = "1.0.0-alpha01"
implementation "androidx.camera:camera-core:${camerax_version}"
Use Case configurations
Each use case class takes some configuration in the form of a Config instance. This interface is used to define a common set of functionality that is used across each of the subclasses for each use case. If you jump into the source of the Config file, you’ll notice there are a lot of library-specific definitions. A Config class is used to hold a collection of options and values which represent the configuration details for the corresponding use case. You’ll manipulate these values through the use case class, so don’t worry too much about what is contained within this Config interface.
For each use case that you wish to implement you will need to instantiate one of these Config classes, provide the desired options / values and then assign it to your use case. We’ll go through how to use each one of these in the corresponding use case sections below.
Preview
We’re going to begin by looking at the Preview use case. This use case can be used to provide a preview of the current camera stream in the form of a SurfaceTexture – this can then be connected to a corresponding TextureView to display the camera content on screen. So to begin with, we need to add a TextureView to our layout to house the content from our Preview:
<TextureView
    android:id="@+id/preview"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
Before we can implement the use case, we’re going to configure some options that will be used for our view finder. This configuration takes the form of a PreviewConfig class, and we’ll use its Builder to configure some of the options for our use case:
val previewConfig = PreviewConfig.Builder().apply {
    setLensFacing(CameraX.LensFacing.FRONT)
    // configure other options
}.build()
There are a number of options that we can apply when configuring this instance of our PreviewConfig – a fuller configuration sketch follows this list:
- setTargetName() – Sets a unique name for identifying the configuration
- setTargetResolution() – Sets a target resolution in the form of a minimum bounding area for the preview
- setLensFacing() – Sets the lens which is to be used for the viewfinder in the form of a LensFacing value. This can be one of either LensFacing.FRONT or LensFacing.BACK
- setTargetAspectRatio() – Takes a Rational instance that defines the aspect ratio to be used for images
- setTargetRotation() – Used to pass in the current screen orientation for convenient orientation handling
- setCallbackHandler() – Provide a handler to be used as a default for any callbacks
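To make this a little more concrete, here’s a rough sketch of a more fully configured PreviewConfig using the builder options listed above – the values themselves are purely illustrative:

// An illustrative PreviewConfig making use of more of the available options.
// view_texture is the synthetic reference to the TextureView in our layout.
val previewConfig = PreviewConfig.Builder().apply {
    setTargetName("preview")                          // unique name for identification
    setLensFacing(CameraX.LensFacing.FRONT)           // use the front-facing lens
    setTargetAspectRatio(Rational(9, 16))             // desired aspect ratio for images
    setTargetRotation(view_texture.display.rotation)  // account for the current orientation
}.build()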
Next we need to create a new instance of the Preview class, passing in our defined configuration. We then set the onPreviewOutputUpdateListener on our Preview instance to observe any output that comes from our use case.
val previewUseCase = Preview(previewConfig)
previewUseCase.onPreviewOutputUpdateListener = Preview.OnPreviewOutputUpdateListener {
    val viewGroup = view_texture.parent as ViewGroup
    viewGroup.removeView(view_texture)
    viewGroup.addView(view_texture)
    view_texture.surfaceTexture = it.surfaceTexture
}
Within the listener above you can see that we take the generated surfaceTexture and assign it to the TextureView that we have defined within our layout. This surfaceTexture contains the content of the camera feed, so here we are just rerouting this output to be shown within our activity / fragment layout. You’ll notice that we remove and then re-add the TextureView – this is so that the content can be laid out again to be shown correctly on the screen.
The PreviewOutput that we receive within this callback is a collection of data that we can make use of, retrieved via the interface methods that are provided (a small example follows this list):
- getSurfaceTexture() – Returns a SurfaceTexture instance that contains the given image data
- getTextureSize() – Returns a Size instance that represents the dimensions of the given SurfaceTexture
- getRotationDegrees() – Returns an int value that represents the rotation value of the SurfaceTexture in degrees
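For instance, within the listener above we could log the size and rotation of the preview output before wiring up the SurfaceTexture – a trivial illustration of reading these properties:

// Inside the OnPreviewOutputUpdateListener, 'it' is the PreviewOutput
Log.d("CameraXApp", "Texture size: ${it.textureSize}, rotation: ${it.rotationDegrees} degrees")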
Now, if you’ve looked at the codelab for CameraX, you may have noticed the following method being called within the OnPreviewOutputUpdateListener callback:
private fun updateTransform() {
    val matrix = Matrix()

    // Compute the center of the view finder
    val centerX = viewFinder.width / 2f
    val centerY = viewFinder.height / 2f

    // Correct preview output to account for display rotation
    val rotationDegrees = when (viewFinder.display.rotation) {
        Surface.ROTATION_0 -> 0
        Surface.ROTATION_90 -> 90
        Surface.ROTATION_180 -> 180
        Surface.ROTATION_270 -> 270
        else -> return
    }
    matrix.postRotate(-rotationDegrees.toFloat(), centerX, centerY)

    // Finally, apply transformations to our TextureView
    viewFinder.setTransform(matrix)
}
This code is used to compensate when the orientation of the device changes, ensuring that the view finder is kept in an upright position. You can see that here we fetch the center of the view finder, read the current rotation from its display property, and then transform the view finder using the rotation calculated from these values.
Whilst this helps us to account for these orientation changes on the device, there are other things that we need to account for when using the Preview use case. There may be cases in which we want to account for 180-degree device rotations, or even non-square viewfinders where the aspect ratio changes based on the device orientation. Here we require a little bit more logic to be able to account for these situations. If you’ve looked at the official CameraX sample, you may have spotted that there is a method being used to account for these requirements:
private fun updateTransform(textureView: TextureView?, rotation: Int?, newBufferDimens: Size, newViewFinderDimens: Size) {
    val textureView = textureView ?: return

    if (rotation == viewFinderRotation &&
        Objects.equals(newBufferDimens, bufferDimens) &&
        Objects.equals(newViewFinderDimens, viewFinderDimens)) {
        // Nothing has changed, no need to transform output again
        return
    }

    if (rotation == null) {
        // Invalid rotation - wait for valid inputs before setting matrix
        return
    } else {
        // Update internal field with new inputs
        viewFinderRotation = rotation
    }

    if (newBufferDimens.width == 0 || newBufferDimens.height == 0) {
        // Invalid buffer dimens - wait for valid inputs before setting matrix
        return
    } else {
        // Update internal field with new inputs
        bufferDimens = newBufferDimens
    }

    if (newViewFinderDimens.width == 0 || newViewFinderDimens.height == 0) {
        // Invalid view finder dimens - wait for valid inputs before setting matrix
        return
    } else {
        // Update internal field with new inputs
        viewFinderDimens = newViewFinderDimens
    }

    val matrix = Matrix()

    // Compute the center of the view finder
    val centerX = viewFinderDimens.width / 2f
    val centerY = viewFinderDimens.height / 2f

    // Correct preview output to account for display rotation
    matrix.postRotate(-viewFinderRotation!!.toFloat(), centerX, centerY)

    // Buffers are rotated relative to the device's 'natural' orientation: swap width and height
    val bufferRatio = bufferDimens.height / bufferDimens.width.toFloat()

    val scaledWidth: Int
    val scaledHeight: Int
    // Match longest sides together -- i.e. apply center-crop transformation
    if (viewFinderDimens.width > viewFinderDimens.height) {
        scaledHeight = viewFinderDimens.width
        scaledWidth = Math.round(viewFinderDimens.width * bufferRatio)
    } else {
        scaledHeight = viewFinderDimens.height
        scaledWidth = Math.round(viewFinderDimens.height * bufferRatio)
    }

    // Compute the relative scale value
    val xScale = scaledWidth / viewFinderDimens.width.toFloat()
    val yScale = scaledHeight / viewFinderDimens.height.toFloat()

    // Scale input buffers to fill the view finder
    matrix.preScale(xScale, yScale, centerX, centerY)

    // Finally, apply transformations to our TextureView
    textureView.setTransform(matrix)
}
Whilst this looks like a lot, we need it to ensure that our preview is correctly displayed within our TextureView instance – for example, taking into account the device orientation, screen rotations and differing aspect ratios. This version of updateTransform() helps us to ensure that the view finder content is displayed correctly.
I don’t want to run through the above in too much detail at this point, but it’s taking a similar approach to the first updateTransform() method that we saw. What this one does is:
- Retrieve the dimensions and rotation of the viewfinder
- Calculate the center of the viewfinder
- Rotate the content based on the calculated rotation
- Scale the content of the viewfinder based on the calculated scale for each of the x and y axes
Now that we have these updateTransform() methods, we’ll want to call one of them from within the OnPreviewOutputUpdateListener callback – a rough sketch of this is shown below.
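For illustration, assuming the first updateTransform() method and the synthetic view_texture reference used earlier, the listener might look something like this:

previewUseCase.onPreviewOutputUpdateListener = Preview.OnPreviewOutputUpdateListener { output ->
    val viewGroup = view_texture.parent as ViewGroup
    viewGroup.removeView(view_texture)
    viewGroup.addView(view_texture)
    view_texture.surfaceTexture = output.surfaceTexture
    // Re-apply the matrix transformation now that new output has arrived
    updateTransform()
}

And once this is done, we can bind our preview use case to the current lifecycle of our application so that the camera will automatically stream to our surface texture once the lifecycle becomes active: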
CameraX.bindToLifecycle(this, previewUseCase)
Image Capture
The ImageCapture use case can be used for capturing images using the device camera. This use case will take the photo and provide the image data, which you are then responsible for handling as desired. Like the previous use case, we need to begin by creating the configuration that we wish to use for our image capture, in the form of an ImageCaptureConfig instance:
val imageCaptureConfig = ImageCaptureConfig.Builder().apply {
    setFlashMode(FlashMode.AUTO)
}.build()
There are a number of options that we can apply when configuring this instance of our ImageCaptureConfig – a fuller configuration sketch follows this list:
- setFlashMode() – Sets the state of the flash for the image capture using a FlashMode value. This can be one of either AUTO, ON or OFF.
- setLensFacing() – Sets the lens to be used for the image capture using a LensFacing value. This can be either FRONT or BACK
- setCaptureMode() – Sets the priority in terms of quality / latency during image capture, using a CaptureMode value. This can be set to either MAX_QUALITY (to prioritise the image quality over any latency that may occur, which can make images take longer to capture) or MIN_LATENCY (to prioritise capture speed over image quality)
- setTargetAspectRatio() – Pass a Rational instance which will be used to assign an aspect ratio for images captured through this configuration
- setTargetRotation() – Pass a Surface rotation value which will be used to set the rotation value for images that are captured through this configuration. This can be set to either Surface.ROTATION_0, Surface.ROTATION_90, Surface.ROTATION_180 or Surface.ROTATION_270. When setting this you can make use of the Display instance from the view finder TextureView – this Display exposes a rotation property (one of these Surface.ROTATION_* values), which can be used to set the target rotation of your ImageCapture use case
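Pulling a few of these options together, a sketch of a fuller configuration might look like the following – the values are illustrative, and the exact location of the CaptureMode constants in this alpha API is an assumption worth verifying against the version you’re using:

val imageCaptureConfig = ImageCaptureConfig.Builder().apply {
    setFlashMode(FlashMode.AUTO)                          // let the camera decide on flash
    setLensFacing(CameraX.LensFacing.BACK)                // capture using the rear lens
    setCaptureMode(ImageCapture.CaptureMode.MIN_LATENCY)  // prioritise speed over quality (assumed constant location)
    setTargetRotation(view_texture.display.rotation)      // match the view finder's rotation
}.build()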
We begin by creating a new instance of the ImageCapture class, passing in the configuration that we just defined. And just like previously, we bind it to the current lifecycle.
val imageCaptureUseCase = ImageCapture(imageCaptureConfig)
CameraX.bindToLifecycle(this, imageCaptureUseCase)
Now that our image capture use case is defined and ready for use, we can go ahead and capture an image with it. We have access to a takePicture() method that can be used to capture an image from the camera – we need to pass in a file reference for where the image data should be saved, along with a listener to be notified when the image has either been saved successfully or an error has occurred.
imageCaptureUseCase.takePicture(File("some_file_path"),
    object : ImageCapture.OnImageSavedListener {
        override fun onImageSaved(file: File) {
            // Handle image saved
        }

        override fun onError(
            error: ImageCapture.UseCaseError,
            message: String,
            throwable: Throwable?
        ) {
            // Handle image error
        }
    })
When the image has been captured and saved successfully we receive a File instance within the onImageSaved callback. On the other hand, when something goes wrong we receive an ImageCapture.UseCaseError reference which lets us know where our image capture went wrong. Currently this will return either the value of FILE_IO_ERROR or UNKNOWN_ERROR.
There is an alternative to this method where we can pass in an instance of the ImageCapture.Metadata class as the final argument (sketched after this list). This allows us to pass some extra details regarding the image, such as:
- location: details regarding the geographic location of the image
- isReversedHorizontal: states whether the image is reversed horizontally
- isReversedVertical: states whether the image is reversed vertically
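As a rough sketch – assuming the field names above and, as described, the metadata being passed as the final argument in this alpha API:

// Illustrative metadata for a capture, e.g. marking a front-lens
// capture as horizontally reversed
val metadata = ImageCapture.Metadata().apply {
    isReversedHorizontal = true
}

imageCaptureUseCase.takePicture(File("some_file_path"),
    object : ImageCapture.OnImageSavedListener {
        override fun onImageSaved(file: File) { /* Handle image saved */ }

        override fun onError(
            error: ImageCapture.UseCaseError,
            message: String,
            throwable: Throwable?
        ) { /* Handle image error */ }
    }, metadata) // metadata as the final argument, per the description above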
Finally, there is another form of the takePicture() method which just takes an instance of the OnImageCapturedListener callback as its only argument. This can be used to listen for when images are captured (or an error occurs) and then handle the result data accordingly. Whilst the previous takePicture() method provided a simpler approach, this gives you more flexibility when handling image captures.
imageCaptureUseCase.takePicture(object : ImageCapture.OnImageCapturedListener() {
    override fun onCaptureSuccess(
        image: ImageProxy,
        rotationDegrees: Int
    ) {
        // Handle image captured
    }

    override fun onError(
        error: ImageCapture.UseCaseError?,
        message: String?,
        cause: Throwable?
    ) {
        // Handle image capture error
    }
})
When using this approach the image is made available in memory – this means that, unlike the previous approach, the image is not saved. If you wish to save the image to the device then it is your responsibility to do so. This approach is more desirable in cases where you may wish to perform operations on an image before saving it, or simply don’t need to save the image to the device at all.
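Here’s a rough sketch of handling the in-memory result – assuming (as an assumption on this alpha API) that overriding onError is optional, and noting that the ImageProxy should be closed once you’re finished with it so that the camera pipeline can continue delivering images:

imageCaptureUseCase.takePicture(object : ImageCapture.OnImageCapturedListener() {
    override fun onCaptureSuccess(image: ImageProxy, rotationDegrees: Int) {
        // e.g. pull the raw bytes out of the first plane for processing
        val buffer = image.planes[0].buffer
        val bytes = ByteArray(buffer.remaining())
        buffer.get(bytes)
        // ... process, upload or save the bytes as desired ...
        image.close() // release the image back to the camera pipeline
    }
})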
Image Analysis
Finally, we have the ImageAnalysis use case which can be used for performing analysis on the images coming from the camera feed. This use case allows us to provide our own custom analyzer class, allowing us to perform specific analysis operations on camera frames. Like the previous use cases, we need to begin by creating the configuration that we wish to use for our image analysis, in the form of an ImageAnalysisConfig instance:
val imageAnalysisConfig = ImageAnalysisConfig.Builder().apply {
    setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
}.build()

There are a number of options that we can apply when configuring this instance of our ImageAnalysisConfig – a fuller configuration sketch follows this list:
- setImageReaderMode() – Sets the method used for acquiring an image to analyse from the media queue. This can be either ACQUIRE_LATEST_IMAGE (use the latest image from the media queue, discarding any images that are older than the latest) or ACQUIRE_NEXT_IMAGE (use the next image from the media queue)
- setImageQueueDepth() – Set the number of images that are to be available to the camera pipeline
- setLensFacing() – Sets the lens to be used for the analysis in the form of a LensFacing value. This can be one of either LensFacing.FRONT or LensFacing.BACK
- setCallbackHandler() – Provide a handler to be used as a default for any callbacks
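Because analysis work shouldn’t block the main thread, a common approach (and the one taken in the official codelab) is to provide a background handler via setCallbackHandler(). Here’s a sketch, with an illustrative thread name and queue depth:

// Run the analyzer callbacks on a dedicated background thread
val analyzerThread = HandlerThread("AnalysisThread").apply { start() }

val imageAnalysisConfig = ImageAnalysisConfig.Builder().apply {
    setImageReaderMode(ImageAnalysis.ImageReaderMode.ACQUIRE_LATEST_IMAGE)
    setImageQueueDepth(5)                              // illustrative queue depth
    setCallbackHandler(Handler(analyzerThread.looper)) // keep analysis off the main thread
}.build()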
For the analysis we need to create our own analyzer class that implements the ImageAnalysis.Analyzer interface. This provides us with an analyze() method that we must override and provide the implementation for our analysis within. For example, we could write an Analyzer class to analyse the luminosity of a given image:
private class LuminosityAnalyzer : ImageAnalysis.Analyzer {
    private var lastAnalyzedTimestamp = 0L

    // Helper extension to extract the remaining bytes from a ByteBuffer
    private fun ByteBuffer.toByteArray(): ByteArray {
        rewind()
        val data = ByteArray(remaining())
        get(data)
        return data
    }

    override fun analyze(image: ImageProxy, rotationDegrees: Int) {
        val currentTimestamp = System.currentTimeMillis()
        // Only perform the analysis at most once per second
        if (currentTimestamp - lastAnalyzedTimestamp >=
            TimeUnit.SECONDS.toMillis(1)) {
            // The first plane of the image holds the luminance data
            val buffer = image.planes[0].buffer
            val data = buffer.toByteArray()
            // Average the luminance value of every pixel
            val pixels = data.map { it.toInt() and 0xFF }
            val luma = pixels.average()
            Log.d("CameraXApp", "Average luminosity: $luma")
            lastAnalyzedTimestamp = currentTimestamp
        }
    }
}
As you can see from this small example, the analysis use case allows us to perform custom analysis on images whilst also taking advantage of the benefits that CameraX brings us.
Once our analyzer class is created, we can assign it as the analyzer for our use case:
val imageAnalysisUseCase = ImageAnalysis(imageAnalysisConfig).apply {
    analyzer = LuminosityAnalyzer()
}
And once the above is done, we can bind our image analysis use case to the current lifecycle of our application so that frames from the camera will be passed to our analyzer once the lifecycle becomes active.
CameraX.bindToLifecycle(this, imageAnalysisUseCase)
Binding multiple use cases
When using the CameraX core library, we may wish to make use of more than one use case. For example, we may wish to capture an image and also perform analysis on the captured media. For this, we can simply define the use cases like we have throughout this article and then bind them all to the current lifecycle:
CameraX.bindToLifecycle(this, imageCaptureUseCase, imageAnalysisUseCase)
Whilst this is just a convenience method, it provides us with a way to bind all of our use cases within a single operation, avoiding a separate bind call for each one – something which could get fairly lengthy if we have many use cases implemented.
In this article we’ve explored the core component of CameraX and dived into each of the use case classes that it provides for us. These use cases aim to simplify camera development within Android apps, helping to provide both a consistent developer and user experience across devices. How do you plan on using the CameraX library? And do you have any questions before planning on doing so? Please feel free to reach out!
In my next article I’ll be writing about Android Preferences from Jetpack using the Google Assistant. Be sure to follow me on Twitter to keep up-to-date with when this is available to read!