You can use ML Kit to recognize entities in an image and label them. This API supports a wide range of custom image classification models. Please refer to Custom models with ML Kit for guidance on model compatibility requirements, where to find pre-trained models, and how to train your own models.
There are two ways to integrate image labeling with custom models: by bundling the pipeline as part of your app, or by using an unbundled pipeline that depends on Google Play Services. If you select the unbundled pipeline, your app will be smaller. See the table below for details.
| | Bundled | Unbundled |
|---|---|---|
| Library name | com.google.mlkit:image-labeling-custom | com.google.android.gms:play-services-mlkit-image-labeling-custom |
| Implementation | Pipeline is statically linked to your app at build time. | Pipeline is dynamically downloaded via Google Play Services. |
| App size | About 3.8 MB size increase. | About 200 KB size increase. |
| Initialization time | Pipeline is available immediately. | Might have to wait for the pipeline to download before first use. |
| API lifecycle stage | General Availability (GA) | Beta |

Note: The unbundled version of image labeling with custom models is currently offered in beta, which means it might change in backward-incompatible ways and is not subject to any SLA or deprecation policy.
There are two ways to integrate a custom model: bundle the model by putting it inside your app’s asset folder, or dynamically download it from Firebase. The following table compares these two options.
| Bundled Model | Hosted Model |
|---|---|
| The model is part of your app's APK, which increases its size. | The model is not part of your APK. It is hosted by uploading it to Firebase Machine Learning. |
| The model is available immediately, even when the Android device is offline. | The model is downloaded on demand. |
| No need for a Firebase project. | Requires a Firebase project. |
| You must republish your app to update the model. | Push model updates without republishing your app. |
| No built-in A/B testing. | Easy A/B testing with Firebase Remote Config. |

In your project-level build.gradle file, make sure to include Google's Maven repository in both your buildscript and allprojects sections.
Add the dependencies for the ML Kit Android libraries to your module's app-level gradle file, which is usually app/build.gradle. Choose one of the following dependencies based on your needs:
For bundling the pipeline with your app:
dependencies {
// ...
// Use this dependency to bundle the pipeline with your app
implementation 'com.google.mlkit:image-labeling-custom:17.0.3'
}
For using the pipeline in Google Play Services:
dependencies {
// ...
// Use this dependency to use the dynamically downloaded pipeline in Google Play Services
implementation 'com.google.android.gms:play-services-mlkit-image-labeling-custom:16.0.0-beta5'
}
If you choose to use the pipeline in Google Play Services, you can configure your app to automatically download the pipeline to the device after your app is installed from the Play Store. To do so, add the following declaration to your app's AndroidManifest.xml file:
<application ...>
...
<meta-data
android:name="com.google.mlkit.vision.DEPENDENCIES"
android:value="custom_ica" />
<!-- To use multiple downloads: android:value="custom_ica,download2,download3" -->
</application>
You can also explicitly check the pipeline availability and request a download through the Google Play services ModuleInstallClient API, as sketched below.
If you don't enable install-time pipeline downloads or request explicit download, the pipeline is downloaded the first time you run the labeler. Requests you make before the download has completed produce no results.
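A minimal sketch of such an explicit check and install request, assuming the unbundled labeler client obtained from ImageLabeling.getClient() can be passed as the OptionalModuleApi handle, and that context and customImageLabelerOptions (built later in this guide) are available; verify the exact handle against the ModuleInstallClient documentation:

Kotlin
import com.google.android.gms.common.moduleinstall.ModuleInstall
import com.google.android.gms.common.moduleinstall.ModuleInstallRequest

// Sketch only: explicitly check for the pipeline module and request it.
// Assumption: the unbundled labeler client is usable as an OptionalModuleApi.
val moduleInstallClient = ModuleInstall.getClient(context)
val labelerClient = ImageLabeling.getClient(customImageLabelerOptions)

moduleInstallClient.areModulesAvailable(labelerClient)
    .addOnSuccessListener { response ->
        if (response.areModulesAvailable()) {
            // Pipeline is already on the device; safe to run the labeler.
        } else {
            // Request an explicit download rather than waiting for first use.
            val request = ModuleInstallRequest.newBuilder()
                .addApi(labelerClient)
                .build()
            moduleInstallClient.installModules(request)
        }
    }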
For dynamically downloading a model from Firebase, add the linkFirebase dependency:
dependencies {
// ...
// Image labeling feature with model downloaded from Firebase
implementation 'com.google.mlkit:image-labeling-custom:17.0.3'
// Or use the dynamically downloaded pipeline in Google Play Services
// implementation 'com.google.android.gms:play-services-mlkit-image-labeling-custom:16.0.0-beta5'
implementation 'com.google.mlkit:linkfirebase:17.0.0'
}
If you want to download a model, make sure you add Firebase to your Android project, if you have not already done so. This is not required when you bundle the model.
To bundle the model with your app:
Copy the model file (usually ending in .tflite or .lite) to your app's assets/ folder. (You might need to create the folder first by right-clicking the app/ folder, then clicking New > Folder > Assets Folder.)
Then, add the following to your app's build.gradle file to ensure Gradle doesn't compress the model file when building the app:
android {
// ...
aaptOptions {
noCompress "tflite"
// or noCompress "lite"
}
}
The model file will be included in the app package and available to ML Kit as a raw asset.
Note: starting from version 4.1 of the Android Gradle plugin, .tflite will be added to the noCompress list by default and the above is not needed anymore.

Create a LocalModel object, specifying the path to the model file:
Kotlin
val localModel = LocalModel.Builder()
    .setAssetFilePath("model.tflite")
    // or .setAbsoluteFilePath(absolute file path to model file)
    // or .setUri(URI to model file)
    .build()

Java
LocalModel localModel =
    new LocalModel.Builder()
        .setAssetFilePath("model.tflite")
        // or .setAbsoluteFilePath(absolute file path to model file)
        // or .setUri(URI to model file)
        .build();

Note: apps targeting Android 11 (API level 30) can no longer access files from external storage because of Storage updates in Android 11.

Note: if you have a model that was trained with AutoML Vision Edge in Firebase (not Google Cloud), the above may not work. Please follow the migration guide for instructions.
To use the remotely-hosted model, create a RemoteModel object with a FirebaseModelSource, specifying the name you assigned the model when you published it:

Kotlin
// Specify the name you assigned in the Firebase console.
val remoteModel = CustomRemoteModel
    .Builder(FirebaseModelSource.Builder("your_model_name").build())
    .build()

Java
// Specify the name you assigned in the Firebase console.
CustomRemoteModel remoteModel =
    new CustomRemoteModel
        .Builder(new FirebaseModelSource.Builder("your_model_name").build())
        .build();
Then, start the model download task, specifying the conditions under which you want to allow downloading. If the model isn't on the device, or if a newer version of the model is available, the task will asynchronously download the model from Firebase:
Kotlin
val downloadConditions = DownloadConditions.Builder()
    .requireWifi()
    .build()
RemoteModelManager.getInstance().download(remoteModel, downloadConditions)
    .addOnSuccessListener {
        // Success.
    }

Java
DownloadConditions downloadConditions = new DownloadConditions.Builder()
    .requireWifi()
    .build();
RemoteModelManager.getInstance().download(remoteModel, downloadConditions)
    .addOnSuccessListener(new OnSuccessListener<Void>() {
        @Override
        public void onSuccess(Void unused) {
            // Success.
        }
    });
Many apps start the download task in their initialization code, but you can do so at any point before you need to use the model.
Configure the image labeler

After you configure your model sources, create an ImageLabeler object from one of them. The following options are available:
| Option | Description |
|---|---|
| confidenceThreshold | Minimum confidence score of detected labels. If not set, any classifier threshold specified by the model's metadata will be used. If the model does not contain any metadata, or the metadata does not specify a classifier threshold, a default threshold of 0.0 will be used. |
| maxResultCount | Maximum number of labels to return. If not set, the default value of 10 will be used. |
If you only have a locally-bundled model, just create a labeler from your LocalModel object:
Kotlin
val customImageLabelerOptions = CustomImageLabelerOptions.Builder(localModel)
    .setConfidenceThreshold(0.5f)
    .setMaxResultCount(5)
    .build()
val labeler = ImageLabeling.getClient(customImageLabelerOptions)

Java
CustomImageLabelerOptions customImageLabelerOptions =
    new CustomImageLabelerOptions.Builder(localModel)
        .setConfidenceThreshold(0.5f)
        .setMaxResultCount(5)
        .build();
ImageLabeler labeler = ImageLabeling.getClient(customImageLabelerOptions);
If you have a remotely-hosted model, you will have to check that it has been downloaded before you run it. You can check the status of the model download task using the model manager's isModelDownloaded() method.
Although you only have to confirm this before running the labeler, if you have both a remotely-hosted model and a locally-bundled model, it might make sense to perform this check when instantiating the image labeler: create a labeler from the remote model if it's been downloaded, and from the local model otherwise.
Kotlin
RemoteModelManager.getInstance().isModelDownloaded(remoteModel)
    .addOnSuccessListener { isDownloaded ->
        val optionsBuilder =
            if (isDownloaded) {
                CustomImageLabelerOptions.Builder(remoteModel)
            } else {
                CustomImageLabelerOptions.Builder(localModel)
            }
        val options = optionsBuilder
            .setConfidenceThreshold(0.5f)
            .setMaxResultCount(5)
            .build()
        val labeler = ImageLabeling.getClient(options)
    }

Java
RemoteModelManager.getInstance().isModelDownloaded(remoteModel)
    .addOnSuccessListener(new OnSuccessListener<Boolean>() {
        @Override
        public void onSuccess(Boolean isDownloaded) {
            CustomImageLabelerOptions.Builder optionsBuilder;
            if (isDownloaded) {
                optionsBuilder = new CustomImageLabelerOptions.Builder(remoteModel);
            } else {
                optionsBuilder = new CustomImageLabelerOptions.Builder(localModel);
            }
            CustomImageLabelerOptions options = optionsBuilder
                .setConfidenceThreshold(0.5f)
                .setMaxResultCount(5)
                .build();
            ImageLabeler labeler = ImageLabeling.getClient(options);
        }
    });
If you only have a remotely-hosted model, you should disable model-related functionality—for example, grey-out or hide part of your UI—until you confirm the model has been downloaded. You can do so by attaching a listener to the model manager's download() method:
Kotlin
RemoteModelManager.getInstance().download(remoteModel, conditions)
    .addOnSuccessListener {
        // Download complete. Depending on your app, you could enable the ML
        // feature, or switch from the local model to the remote model, etc.
    }

Java
RemoteModelManager.getInstance().download(remoteModel, conditions)
    .addOnSuccessListener(new OnSuccessListener<Void>() {
        @Override
        public void onSuccess(Void unused) {
            // Download complete. Depending on your app, you could enable
            // the ML feature, or switch from the local model to the remote
            // model, etc.
        }
    });

2. Prepare the input image
Then, for each image you want to label, create an InputImage object from your image. The image labeler runs fastest when you use a Bitmap or, if you use the camera2 API, a YUV_420_888 media.Image, which are recommended when possible.
You can create an InputImage object from different sources, each of which is explained below.
Using a media.Image

To create an InputImage object from a media.Image object, such as when you capture an image from a device's camera, pass the media.Image object and the image's rotation to InputImage.fromMediaImage().
If you use the CameraX library, the OnImageCapturedListener and ImageAnalysis.Analyzer classes calculate the rotation value for you.
Kotlin
private class YourImageAnalyzer : ImageAnalysis.Analyzer {
    override fun analyze(imageProxy: ImageProxy) {
        val mediaImage = imageProxy.image
        if (mediaImage != null) {
            val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}

Java
private class YourAnalyzer implements ImageAnalysis.Analyzer {
    @Override
    public void analyze(ImageProxy imageProxy) {
        Image mediaImage = imageProxy.getImage();
        if (mediaImage != null) {
            InputImage image =
                InputImage.fromMediaImage(mediaImage, imageProxy.getImageInfo().getRotationDegrees());
            // Pass image to an ML Kit Vision API
            // ...
        }
    }
}
If you don't use a camera library that gives you the image's rotation degree, you can calculate it from the device's rotation degree and the orientation of the camera sensor in the device:
Kotlin
private val ORIENTATIONS = SparseIntArray()

init {
    ORIENTATIONS.append(Surface.ROTATION_0, 0)
    ORIENTATIONS.append(Surface.ROTATION_90, 90)
    ORIENTATIONS.append(Surface.ROTATION_180, 180)
    ORIENTATIONS.append(Surface.ROTATION_270, 270)
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
@Throws(CameraAccessException::class)
private fun getRotationCompensation(cameraId: String, activity: Activity, isFrontFacing: Boolean): Int {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    val deviceRotation = activity.windowManager.defaultDisplay.rotation
    var rotationCompensation = ORIENTATIONS.get(deviceRotation)

    // Get the device's sensor orientation.
    val cameraManager = activity.getSystemService(CAMERA_SERVICE) as CameraManager
    val sensorOrientation = cameraManager
        .getCameraCharacteristics(cameraId)
        .get(CameraCharacteristics.SENSOR_ORIENTATION)!!

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360
    }
    return rotationCompensation
}
Java
private static final SparseIntArray ORIENTATIONS = new SparseIntArray();

static {
    ORIENTATIONS.append(Surface.ROTATION_0, 0);
    ORIENTATIONS.append(Surface.ROTATION_90, 90);
    ORIENTATIONS.append(Surface.ROTATION_180, 180);
    ORIENTATIONS.append(Surface.ROTATION_270, 270);
}

/**
 * Get the angle by which an image must be rotated given the device's current
 * orientation.
 */
@RequiresApi(api = Build.VERSION_CODES.LOLLIPOP)
private int getRotationCompensation(String cameraId, Activity activity, boolean isFrontFacing)
        throws CameraAccessException {
    // Get the device's current rotation relative to its "native" orientation.
    // Then, from the ORIENTATIONS table, look up the angle the image must be
    // rotated to compensate for the device's rotation.
    int deviceRotation = activity.getWindowManager().getDefaultDisplay().getRotation();
    int rotationCompensation = ORIENTATIONS.get(deviceRotation);

    // Get the device's sensor orientation.
    CameraManager cameraManager = (CameraManager) activity.getSystemService(CAMERA_SERVICE);
    int sensorOrientation = cameraManager
            .getCameraCharacteristics(cameraId)
            .get(CameraCharacteristics.SENSOR_ORIENTATION);

    if (isFrontFacing) {
        rotationCompensation = (sensorOrientation + rotationCompensation) % 360;
    } else { // back-facing
        rotationCompensation = (sensorOrientation - rotationCompensation + 360) % 360;
    }
    return rotationCompensation;
}
Then, pass the media.Image object and the rotation degree value to InputImage.fromMediaImage():

Kotlin
val image = InputImage.fromMediaImage(mediaImage, rotation)

Java
InputImage image = InputImage.fromMediaImage(mediaImage, rotation);

Using a file URI
To create an InputImage object from a file URI, pass the app context and file URI to InputImage.fromFilePath(). This is useful when you use an ACTION_GET_CONTENT intent to prompt the user to select an image from their gallery app; a picker sketch follows the snippets below.
Kotlin
val image: InputImage
try {
    image = InputImage.fromFilePath(context, uri)
} catch (e: IOException) {
    e.printStackTrace()
}

Java
InputImage image;
try {
    image = InputImage.fromFilePath(context, uri);
} catch (IOException e) {
    e.printStackTrace();
}
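For context, here is a small sketch of obtaining such a URI with the AndroidX Activity Result API, whose GetContent contract wraps an ACTION_GET_CONTENT intent. The pickImage name and the surrounding Activity or Fragment are illustrative, not part of ML Kit:

Kotlin
// Sketch: launch a content picker and wrap the chosen URI in an InputImage.
// Must live in an Activity or Fragment; names here are illustrative.
val pickImage = registerForActivityResult(ActivityResultContracts.GetContent()) { uri ->
    uri?.let {
        try {
            val image = InputImage.fromFilePath(context, it)
            // Pass image to the labeler...
        } catch (e: IOException) {
            e.printStackTrace()
        }
    }
}

// Later, e.g. from a button click handler, filter the picker to images:
pickImage.launch("image/*")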
Using a ByteBuffer or ByteArray
To create an InputImage object from a ByteBuffer or a ByteArray, first calculate the image rotation degree as previously described for media.Image input. Then, create the InputImage object with the buffer or array, together with the image's height, width, color encoding format, and rotation degree:
Kotlin
val image = InputImage.fromByteBuffer(
    byteBuffer,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)
// Or:
val image = InputImage.fromByteArray(
    byteArray,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
)

Java
InputImage image = InputImage.fromByteBuffer(byteBuffer,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);
// Or:
InputImage image = InputImage.fromByteArray(
    byteArray,
    /* image width */ 480,
    /* image height */ 360,
    rotationDegrees,
    InputImage.IMAGE_FORMAT_NV21 // or IMAGE_FORMAT_YV12
);

Using a Bitmap
To create an InputImage object from a Bitmap object, make the following declaration:
Kotlin
val image = InputImage.fromBitmap(bitmap, rotationDegrees)

Java
InputImage image = InputImage.fromBitmap(bitmap, rotationDegrees);

The image is represented by a Bitmap object together with rotation degrees.
3. Run the image labeler

To label objects in an image, pass the image object to the ImageLabeler's process() method.
Kotlin
labeler.process(image)
    .addOnSuccessListener { labels ->
        // Task completed successfully
        // ...
    }
    .addOnFailureListener { e ->
        // Task failed with an exception
        // ...
    }
Java
labeler.process(image)
    .addOnSuccessListener(new OnSuccessListener<List<ImageLabel>>() {
        @Override
        public void onSuccess(List<ImageLabel> labels) {
            // Task completed successfully
            // ...
        }
    })
    .addOnFailureListener(new OnFailureListener() {
        @Override
        public void onFailure(@NonNull Exception e) {
            // Task failed with an exception
            // ...
        }
    });

Note: If you are using the CameraX API, make sure to close the ImageProxy when you finish using it, e.g., by adding an OnCompleteListener to the Task returned from the process method, as in the sketch below. See the VisionProcessorBase class in the quickstart sample app for a complete example.
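A minimal Kotlin sketch of that pattern, assuming labeler is the ImageLabeler you configured earlier and this method lives in your ImageAnalysis.Analyzer:

Kotlin
// Sketch: close the ImageProxy only after ML Kit is done with the frame,
// so CameraX can deliver the next one.
override fun analyze(imageProxy: ImageProxy) {
    val mediaImage = imageProxy.image
    if (mediaImage == null) {
        imageProxy.close()
        return
    }
    val image = InputImage.fromMediaImage(mediaImage, imageProxy.imageInfo.rotationDegrees)
    labeler.process(image)
        // Runs on both success and failure.
        .addOnCompleteListener { imageProxy.close() }
}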
4. Get information about labeled entities
If the image labeling operation succeeds, a list of ImageLabel objects is passed to the success listener. Each ImageLabel object represents something that was labeled in the image. You can get each label's text description (if available in the metadata of the TensorFlow Lite model file), confidence score, and index. For example:
Kotlin
for (label in labels) {
    val text = label.text
    val confidence = label.confidence
    val index = label.index
}
Java
for (ImageLabel label : labels) {
    String text = label.getText();
    float confidence = label.getConfidence();
    int index = label.getIndex();
}

Note: if you use a TensorFlow Lite model that is incompatible with ML Kit, you will get an MlKitException with error code MlKitException#INVALID_ARGUMENT and some details about why it is not compatible. See Custom Model Compatibility Requirements for details. A sketch of handling this error follows.
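A hedged sketch of inspecting that error in a failure listener; the TAG constant is illustrative and not part of the API:

Kotlin
// Sketch: detect an incompatible custom model in the failure listener.
labeler.process(image)
    .addOnFailureListener { e ->
        val mlKitError = e as? MlKitException
        if (mlKitError?.errorCode == MlKitException.INVALID_ARGUMENT) {
            // The model is incompatible with ML Kit; the message explains why.
            Log.e(TAG, "Incompatible model: ${mlKitError.message}")
        }
    }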
Tips to improve real-time performance
If you want to label images in a real-time application, follow these guidelines to achieve the best frame rates:
- If you use the Camera or camera2 API, throttle calls to the image labeler. If a new video frame becomes available while the image labeler is running, drop the frame (a simple throttling sketch follows this list). See the VisionProcessorBase class in the quickstart sample app for an example.
- If you use the CameraX API, be sure that the backpressure strategy is set to its default value ImageAnalysis.STRATEGY_KEEP_ONLY_LATEST. This guarantees only one image will be delivered for analysis at a time. If more images are produced when the analyzer is busy, they will be dropped automatically and not queued for delivery. Once the image being analyzed is closed by calling ImageProxy.close(), the next latest image will be delivered.
- If you use the output of the image labeler to overlay graphics on the input image, first get the result from ML Kit, then render the image and overlay in a single step. See the CameraSourcePreview and GraphicOverlay classes in the quickstart sample app for an example.
- If you use the Camera2 API, capture images in ImageFormat.YUV_420_888 format. If you use the older Camera API, capture images in ImageFormat.NV21 format.
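As a sketch of the throttling described in the first tip (not the quickstart's actual VisionProcessorBase implementation), a simple busy flag can drop frames while the labeler is still working; onFrame is a hypothetical per-frame callback from your camera pipeline:

Kotlin
// Sketch: drop incoming frames while a label request is in flight.
@Volatile
private var isProcessing = false

fun onFrame(image: InputImage) {
    if (isProcessing) return // drop this frame
    isProcessing = true
    labeler.process(image)
        .addOnCompleteListener { isProcessing = false }
}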