Using CameraX in Android Jetpack

In this blog post, we’ll discuss the principles and architecture behind the CameraX Jetpack support library and how to use it. We’ll also mention the general state of Android’s camera APIs and why CameraX is so important.

Evolution of the camera APIs

When Android launched, it included the Camera API, which let developers access the camera hardware to take pictures, record video, and more. However, with the introduction of multi-camera support, the rising popularity of image analysis apps, and specialized modes like HDR and night mode, the original Camera API started to fall short.

Therefore, with Android L (API 21) Google deprecated the original Camera API and introduced Camera2. This new API gave developers much finer control over the hardware and allowed us to implement complex camera apps that rival the native ones available on the device. This power, though, came with an increase in complexity and code. Google knew this, so it later introduced the Jetpack support library called CameraX, which uses the Camera2 API under the hood but is much simpler to use.


CameraX provides an easy-to-use API that works all the way back to Android L (API 21) using a use case-based approach that is also lifecycle-aware and handles compatibility issues between devices.

Use case-based approach

In CameraX, you interact with the camera through your use case scenario, that is, what you want to do with it:

  • Preview: show the camera preview in your app.
  • Image analysis: feed what the camera sees into your algorithms, for example in machine learning.
  • Image capture: take a picture.
  • Video capture: record a video.

As you’ll see later, you can run any of these use cases concurrently and develop complex, fully featured applications in very few lines of code. Video capture, however, is not fully supported yet, so the table below shows the combinations of use cases available today.

[Table: combinations of use cases available currently]

Device compatibility

Unsurprisingly, delivering a consistent camera experience across different device manufacturers involves a lot of device-specific testing, due to the differences in the APIs and features available on camera modules. For that reason, the CameraX team at Google invested in an automated test lab to ensure the library behaves consistently, taking that burden off developers.

CameraX Extensions

Another advantage of CameraX is its vendor extensions, which give you access to features like HDR, Night, and Beauty modes when they are available on the device and the manufacturer has implemented the corresponding extension. As of 2019, companies like Samsung, Huawei, LG, and Motorola were already on board with extensions.
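As an illustration, here is a sketch using the extensions API from that era (it has since been redesigned around an ExtensionsManager entry point), so this snippet may not compile against current releases; HdrImageCaptureExtender comes from the separate camera-extensions artifact:

```kotlin
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageCapture
import androidx.camera.extensions.HdrImageCaptureExtender

// Sketch of the era's vendor-extension API, not the current one
val builder = ImageCapture.Builder()
val hdrExtender = HdrImageCaptureExtender.create(builder)
val cameraSelector = CameraSelector.DEFAULT_BACK_CAMERA
if (hdrExtender.isExtensionAvailable(cameraSelector)) {
    // Only enable HDR if the vendor implemented the extension on this device
    hdrExtender.enableExtension(cameraSelector)
}
val imageCapture = builder.build()
```

If the extension is unavailable, the use case simply behaves as a regular image capture.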


Setting up CameraX

Add the CAMERA permission to your manifest. Also include the WRITE_EXTERNAL_STORAGE permission if you want to save image files on devices running Android 9 or lower, and the RECORD_AUDIO permission if you want to record videos.

<uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />

Add the Google Maven repository to your project-level build.gradle file:

allprojects {
    repositories {
        google()
    }
}

Configure language support, whether Java or Kotlin, in your module-level build.gradle:

// For Java projects
compileOptions {
    sourceCompatibility JavaVersion.VERSION_1_8
    targetCompatibility JavaVersion.VERSION_1_8
}

// For Kotlin projects, additionally
kotlinOptions {
    jvmTarget = "1.8"
}

Then add the CameraX dependencies:

// Check developer.android.com for the latest CameraX versions;
// 1.0.0-beta07 is only an example from the period this post was written
def camerax_version = "1.0.0-beta07"

// camera-core is optional because it is included by camera-camera2
implementation "androidx.camera:camera-core:$camerax_version"
implementation "androidx.camera:camera-camera2:$camerax_version"
implementation "androidx.camera:camera-lifecycle:$camerax_version"
// camera-view (which provides PreviewView) was versioned separately at the time
implementation "androidx.camera:camera-view:1.0.0-alpha14"

Request the permissions

CameraX won’t work without the permissions mentioned earlier, so make sure your app has been granted them before starting any use case.

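The original snippet is not reproduced here, but a typical runtime permission check looks like the sketch below; startCamera() is a hypothetical helper that binds the use cases once permissions are granted:

```kotlin
import android.Manifest
import android.content.pm.PackageManager
import androidx.core.app.ActivityCompat
import androidx.core.content.ContextCompat

private val REQUIRED_PERMISSIONS = arrayOf(
    Manifest.permission.CAMERA,
    Manifest.permission.RECORD_AUDIO,
    Manifest.permission.WRITE_EXTERNAL_STORAGE
)
private const val REQUEST_CODE_PERMISSIONS = 10 // arbitrary request code

// In your Activity's onCreate:
if (REQUIRED_PERMISSIONS.all {
        ContextCompat.checkSelfPermission(this, it) == PackageManager.PERMISSION_GRANTED
    }) {
    startCamera() // hypothetical helper that starts the use cases
} else {
    ActivityCompat.requestPermissions(this, REQUIRED_PERMISSIONS, REQUEST_CODE_PERMISSIONS)
}
```

The result of the request arrives in onRequestPermissionsResult, where you can start the camera or inform the user.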


Preview

The preview use case, as mentioned earlier, shows what the camera sees in your app. First, you need to add a PreviewView to your layout.

<androidx.camera.view.PreviewView
    android:id="@+id/previewView"
    android:layout_width="match_parent"
    android:layout_height="wrap_content" />

Notice that it’s not obligatory to put the PreviewView inside a FrameLayout; it can be any other ViewGroup.

Preview use case
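The original gist is not shown here; a minimal sketch of the preview setup, assuming an AppCompatActivity and a PreviewView with the id previewView, could look like:

```kotlin
import androidx.camera.core.AspectRatio
import androidx.camera.core.CameraSelector
import androidx.camera.core.Preview
import androidx.camera.lifecycle.ProcessCameraProvider
import androidx.camera.view.PreviewView
import androidx.core.content.ContextCompat

// Inside an AppCompatActivity (which is a LifecycleOwner)
private fun startCamera() {
    val cameraProviderFuture = ProcessCameraProvider.getInstance(this)
    cameraProviderFuture.addListener({
        val cameraProvider = cameraProviderFuture.get()

        // Request a 4:3 preview; CameraX honors this as closely as the device allows
        val preview = Preview.Builder()
            .setTargetAspectRatio(AspectRatio.RATIO_4_3)
            .build()

        // Hand the PreviewView's surface to the use case
        val previewView = findViewById<PreviewView>(R.id.previewView)
        preview.setSurfaceProvider(previewView.surfaceProvider)

        // Bind the use case to this lifecycle with the default back camera
        cameraProvider.bindToLifecycle(
            this, CameraSelector.DEFAULT_BACK_CAMERA, preview
        )
    }, ContextCompat.getMainExecutor(this))
}
```

The paragraphs below walk through these steps one by one.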

First, a ListenableFuture for the ProcessCameraProvider object is obtained; this object is necessary to start any use case, including preview. A listener is added to handle the provider once it is returned.

Next, the Preview object representing the preview use case is constructed. During its construction you can set several parameters, like the targetAspectRatio of the preview, targetResolution, and rotation; the CameraX library will fulfill these requests as closely as the device’s capabilities allow. In this example, we are requesting a preview with an aspect ratio of 4:3.

Then the PreviewView from the layout is retrieved and its SurfaceProvider is assigned to the Preview use case object.

Finally, we use the CameraProvider object to bind the use case to the lifecycle of the activity or fragment where we are showing the preview. We also provide a CameraSelector object, which abstracts the set of requirements and priorities used to select a camera (in this example, the default back camera), and the Preview object itself. That is all: a preview of the camera will appear in your app!

Image analysis

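The original gist is not shown here; a sketch of the use case, assuming the cameraProvider and preview objects from the preview example are in scope, could look like:

```kotlin
import android.util.Size
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageAnalysis
import androidx.core.content.ContextCompat

// Request 1280x720 frames and drop stale ones while the analyzer is busy
val imageAnalysis = ImageAnalysis.Builder()
    .setTargetResolution(Size(1280, 720))
    .setBackpressureStrategy(ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST)
    .build()

imageAnalysis.setAnalyzer(ContextCompat.getMainExecutor(this)) { imageProxy ->
    // ... run your custom processing on the frame here ...
    imageProxy.close() // closing releases the frame so new ones keep arriving
}

// Bind both the preview and the analysis use cases together
cameraProvider.bindToLifecycle(
    this, CameraSelector.DEFAULT_BACK_CAMERA, preview, imageAnalysis
)
```

The resolution and executor shown are only example choices.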

First, build the ImageAnalysis object that represents this use case in CameraX. In this example, instead of a specific aspect ratio we request a specific targetResolution and set a backpressure strategy, which determines how the library delivers frames when the analyze method isn’t ready to receive a new one: STRATEGY_KEEP_ONLY_LATEST drops stale frames and delivers only the latest available, while STRATEGY_BLOCK_PRODUCER delivers frames in sequential order, blocking new frames from entering the pipeline until the method returns.

Next, connect an analyzer to the ImageAnalysis object to perform your custom processing. Never forget to close the ImageProxy object when you are done; failing to do so prevents the production of further images.

Finally, as in the preview use case, bind the ImageAnalysis use case to the lifecycle of the activity or fragment. Notice that the Preview object from before is bound as well, to show that several use cases can and should be bound at the same time when your app supports more than one scenario.

Image capture

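The original gist is not shown here; a sketch of the use case, assuming the cameraProvider from the earlier examples is in scope and using an example file name, could look like:

```kotlin
import java.io.File
import androidx.camera.core.CameraSelector
import androidx.camera.core.ImageCapture
import androidx.camera.core.ImageCaptureException
import androidx.core.content.ContextCompat

// Favor speed over quality for this example
val imageCapture = ImageCapture.Builder()
    .setCaptureMode(ImageCapture.CAPTURE_MODE_MINIMIZE_LATENCY)
    .build()

cameraProvider.bindToLifecycle(this, CameraSelector.DEFAULT_BACK_CAMERA, imageCapture)

// Normally triggered from a button's onClick handler
val photoFile = File(externalMediaDirs.first(), "photo.jpg")
val outputOptions = ImageCapture.OutputFileOptions.Builder(photoFile).build()

imageCapture.takePicture(
    outputOptions,
    ContextCompat.getMainExecutor(this),
    object : ImageCapture.OnImageSavedCallback {
        override fun onImageSaved(output: ImageCapture.OutputFileResults) {
            // The image landed under Android/media/{app package name}/
        }
        override fun onError(exception: ImageCaptureException) {
            exception.printStackTrace()
        }
    }
)
```

The paragraphs below describe each of these steps.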

First, build the ImageCapture object representing the use case of the same name. Interesting parameters to set during its creation include targetRotation, which modifies the EXIF rotation metadata in the resulting image, and captureMode, which determines whether to prioritize quality or speed with CAPTURE_MODE_MAXIMIZE_QUALITY and CAPTURE_MODE_MINIMIZE_LATENCY respectively.

As with the previous use cases, bind this one to the lifecycle as well.

Usually you’ll place a button in your layout, capture its onClick event, and trigger the actual capture there, but in this example a simple two-second delay is used instead.

OutputFileOptions is an object that wraps a normal File object to set where the captured image is stored, and it also allows attaching metadata. The images saved here can be found in the Android/media/{app package name} folder in your device’s shared storage.

Call takePicture whenever you want to actually perform a capture; its listener lets you handle the moment the image gets saved, or any error that occurs.

Video capture

As of this writing, video capture isn’t officially supported by CameraX. However, if you can endure suppressing some restricted APIs in your IDE and some other minor limitations, it will work!

Because some of these APIs are still restricted, the corresponding compiler warning must be suppressed.

The VideoCapture object is the representation of this use case. As with the other use cases, you can set parameters like bit rate, audio settings, aspect ratio, and resolution.

Then comes the usual binding process, as with any other use case. Notice, though, that this example switches from the back camera used in the other examples to the front one. Nothing important, I just wanted you to see your face :)

Similar to the image capture use case, you must set a place to store the video, call startRecording, and add a listener for when recording is done or an error occurs.

Unlike any other use case, you must explicitly stop the recording so the library knows when to save the video. Here, stopRecording is called after two additional seconds.


CameraX seems to be a very promising API for camera-related tasks that require more than just launching an Intent to the camera app, but for which the overly complex Camera2 or the deprecated original Camera API would be overkill. In fact, according to Google, two-thirds of camera accesses occur outside the native camera app, so the timing for this API couldn’t be better.
