Developers
July 17, 2020

GCP Releases a New ML Kit SDK For Mobile Development

GCP is introducing new features and improvements that let mobile developers use ML Kit more easily: an updated SDK with no more dependency on Firebase.

Today we will talk about ML Kit, an SDK that helps mobile developers integrate machine learning into their apps. It originally launched in 2018, and more than 25,000 applications have been built with it since.

Now, as of June 2020, ML Kit no longer depends on Firebase. Responding to developer requests for more flexibility, Google has moved the on-device APIs into a standalone ML Kit SDK. You can still use Firebase in your apps if you want to, but the focus shifts from cloud machine learning to on-device machine learning.

ML Kit makes Google's machine learning available to mobile developers in a powerful yet easy-to-use package. It supports both iOS and Android, and the solution has been updated so that it runs on the device.

It's a fast service, fast enough for real-time use cases: because processing happens on the device, there is no network latency. Inference can run on images and video multiple times per second, and it can also be applied to text strings.

You can use the APIs even when there is no network connectivity, meaning that you can work entirely offline.

All processing is done locally, so you don't have to send sensitive user data over the network to a server. Privacy is greatly improved.

If you currently use ML Kit for Firebase's on-device APIs, it is recommended that you migrate to the new version, the standalone ML Kit SDK, which is where the new features and updates land. A migration guide is provided in the documentation.

Developers have also asked for ML Kit models to be shipped through Google Play services, which results in smaller app sizes, and Google has heard the request. Face detection has now been added to the list of APIs that support this.
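
As a rough sketch of what these changes look like in an Android project, the Gradle snippet below contrasts the old Firebase ML artifact with the new standalone ML Kit artifacts. The artifact names and versions shown here are illustrative only; check the official migration guide for the exact coordinates of each API.

```kotlin
// build.gradle.kts (app module) -- illustrative dependencies only.
dependencies {
    // Before: ML Kit for Firebase bundled the on-device vision APIs together.
    // implementation("com.google.firebase:firebase-ml-vision:24.0.3")

    // After: standalone ML Kit, one artifact per API.
    // "Bundled" variant: the model ships inside your APK.
    implementation("com.google.mlkit:face-detection:16.0.0")

    // "Unbundled" variant: the model is delivered via Google Play services,
    // which keeps your app footprint smaller.
    implementation("com.google.android.gms:play-services-mlkit-barcode-scanning:16.0.0")
}
```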

Support for Android Jetpack Lifecycle has been added to all APIs. Developers can use addObserver to automatically manage the teardown of ML Kit clients, which also makes integrating CameraX much easier. Integrating CameraX into your apps is recommended: the integration is easy and the image quality is noticeably better than with the legacy Camera1 API.
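
For illustration, here is a minimal, hypothetical Activity showing the lifecycle pattern. FaceDetection.getClient() is a real ML Kit entry point; the surrounding Activity code is just a sketch.

```kotlin
import android.os.Bundle
import androidx.appcompat.app.AppCompatActivity
import com.google.mlkit.vision.face.FaceDetection

class FaceAnalysisActivity : AppCompatActivity() {

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        val detector = FaceDetection.getClient()

        // ML Kit clients implement LifecycleObserver, so registering the
        // detector here lets the framework close it automatically when the
        // Activity is destroyed, instead of overriding onDestroy() by hand.
        lifecycle.addObserver(detector)

        // ... feed camera frames (for example from a CameraX ImageAnalysis
        // use case) to detector.process(image) here ...
    }
}
```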

To help you get started with the new ML Kit and integrate CameraX, there is a codelab that shows how to recognize text, identify its language, and translate it. If you want to know more or have specific questions, you can tag your question with google-mlkit on Stack Overflow; the Google team monitors the tag daily, so you should receive an answer shortly.

ML Kit also provides an early access program, which allows developers to partner with the ML Kit team and get exclusive access to upcoming features. Through this program, two new APIs have been made available: Entity Extraction and Pose Detection.

Entity Extraction detects entities in text and makes them actionable. The API supports phone numbers, addresses, payment card numbers, tracking numbers, date/time expressions, and more. Pose Detection is a low-latency pose detection API that supports 33 skeletal points, including hand and foot tracking.
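
Since Entity Extraction was still in early access at the time of writing, the exact API surface may differ, but a minimal sketch along the lines of the publicly documented API could look like this (the class names and example usage are assumptions):

```kotlin
import com.google.mlkit.nl.entityextraction.EntityExtraction
import com.google.mlkit.nl.entityextraction.EntityExtractorOptions

fun extractEntities(text: String) {
    val extractor = EntityExtraction.getClient(
        EntityExtractorOptions.Builder(EntityExtractorOptions.ENGLISH).build()
    )

    // The language model is downloaded on demand the first time it is needed.
    extractor.downloadModelIfNeeded()
        .onSuccessTask { extractor.annotate(text) }
        .addOnSuccessListener { annotations ->
            for (annotation in annotations) {
                for (entity in annotation.entities) {
                    // entity.type distinguishes addresses, phone numbers,
                    // date/time expressions, and so on.
                    println("Found '${annotation.annotatedText}' of type ${entity.type}")
                }
            }
        }
        .addOnFailureListener { e -> println("Entity extraction failed: $e") }
}
```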

Vision APIs

The Vision APIs include text recognition, face detection, and barcode scanning. We will explain each of these next.

The Text Recognition API recognizes text in any Latin-based character set. It can also be used to automate data entry, for example when processing credit cards, receipts, and business cards.
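
As a minimal sketch, assuming you already have a Bitmap of, say, a receipt, on-device text recognition looks roughly like this (depending on the SDK version, getClient() may require a TextRecognizerOptions argument):

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.text.TextRecognition

fun recognizeText(bitmap: Bitmap) {
    // Wrap the bitmap; the second argument is the image's rotation in degrees.
    val image = InputImage.fromBitmap(bitmap, 0)
    val recognizer = TextRecognition.getClient()

    recognizer.process(image)
        .addOnSuccessListener { result ->
            // The recognized text is organized into blocks, lines, and elements.
            for (block in result.textBlocks) {
                println(block.text)
            }
        }
        .addOnFailureListener { e -> println("Text recognition failed: $e") }
}
```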

With the Face Detection API you can detect faces in an image, and not only the face itself but facial features such as eyes, nose, and mouth. Note that it detects faces but does not identify people, unlike the face recognition system deployed by Facebook.

Using face detection you can extract information about the faces in a picture. This data can later be used, for example, to generate a 3D avatar from a user's photo. Face detection is performed in real time.
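
Here is a minimal face detection sketch, assuming an InputImage built from a camera frame or bitmap; the landmark and classification options are optional and just illustrate the kind of information the API can return:

```kotlin
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.face.FaceDetection
import com.google.mlkit.vision.face.FaceDetectorOptions

fun detectFaces(image: InputImage) {
    val options = FaceDetectorOptions.Builder()
        .setLandmarkMode(FaceDetectorOptions.LANDMARK_MODE_ALL)             // eyes, ears, nose, mouth
        .setClassificationMode(FaceDetectorOptions.CLASSIFICATION_MODE_ALL) // smiling, eyes open
        .build()

    val detector = FaceDetection.getClient(options)

    detector.process(image)
        .addOnSuccessListener { faces ->
            for (face in faces) {
                println("Face at ${face.boundingBox}, smiling probability: ${face.smilingProbability}")
            }
        }
        .addOnFailureListener { e -> println("Face detection failed: $e") }
}
```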

With the Barcode Scanning API, you can read data encoded in the standard barcode formats. The scanning happens on the device, without the need for a network connection.

Barcodes are used worldwide for many use cases, and they let you pass information from the real world to your app in a matter of seconds. QR codes, for example, can encode data such as contact information or network credentials. ML Kit recognizes and parses this structured data for you, allowing your app to respond intelligently to the barcodes it scans.
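
A minimal barcode scanning sketch follows; the format filter is optional, and exact package paths can differ between SDK versions:

```kotlin
import com.google.mlkit.vision.barcode.Barcode
import com.google.mlkit.vision.barcode.BarcodeScannerOptions
import com.google.mlkit.vision.barcode.BarcodeScanning
import com.google.mlkit.vision.common.InputImage

fun scanBarcodes(image: InputImage) {
    // Restricting the scanner to the formats you care about speeds up detection.
    val options = BarcodeScannerOptions.Builder()
        .setBarcodeFormats(Barcode.FORMAT_QR_CODE, Barcode.FORMAT_EAN_13)
        .build()

    val scanner = BarcodeScanning.getClient(options)

    scanner.process(image)
        .addOnSuccessListener { barcodes ->
            for (barcode in barcodes) {
                // Structured payloads (Wi-Fi credentials, contacts, URLs, ...)
                // are parsed automatically and exposed by value type.
                when (barcode.valueType) {
                    Barcode.TYPE_WIFI -> println("Wi-Fi SSID: ${barcode.wifi?.ssid}")
                    Barcode.TYPE_CONTACT_INFO -> println("Contact: ${barcode.contactInfo?.name?.formattedName}")
                    else -> println("Raw value: ${barcode.rawValue}")
                }
            }
        }
        .addOnFailureListener { e -> println("Barcode scanning failed: $e") }
}
```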

In conclusion, Google's ML Kit is an SDK that works on both major mobile operating systems, iOS and Android. The SDK recently received an update that removes its dependency on Firebase. It provides machine learning capabilities to mobile developers, and all data is processed locally, meaning that sensitive information is not sent over a network. It includes features such as text recognition, face detection, and barcode scanning, among others. If you develop mobile applications and are interested in integrating machine learning capabilities, Google ML Kit is worth checking out.

Tags: Google ML Kit, GCP, Machine Learning
Lucas Bonder
Technical Writer
Lucas is an entrepreneur, web developer, and writer on technology topics.
