AutoML Vision Edge: Loading and Running a TensorFlow.js Model (Part 1)

Aayush Arora · Published in Heartbeat · 5 min read · Jan 24, 2020

This is the third post in the Google Cloud AutoML Vision Edge Series. In the first post, we trained and used a .tflite format of the AutoML model. In the second post, we used the .pb file or the TF Saved Model format provided by AutoML using Python.

As we’ve seen, the .tflite model is fast and light but less accurate than the .pb model, which is a bit more complex but provides us with great results.

You can find the previous posts of the series here:

This post will help you understand and infer the TensorFlow.js model format provided by AutoML.

Note: AutoML can export the trained model in several formats; the TF.js format is the one used to run it inside the browser or on a JavaScript-based server.

Since the release of TensorFlow.js, developers have tackled plenty of ML use cases in browsers, from automating games for fun to detecting human poses in real time. The use cases of machine learning in browsers are limitless.

TensorFlow.js is a library that lets you use ML directly in the browser. AutoML takes very little time to create a model, and TensorFlow.js is the easiest and most efficient way to run that model directly inside the browser.

Why ML in the Browser?

It’s an interesting question with various answers. A browser is present on virtually every device, whether desktop, tablet, or mobile, so ML in the browser can have a large impact.

From the user’s perspective, an ML model can be set up in the browser without any drivers, libraries, etc. Moreover, all the data stays on the user’s device, with no need to send it to a server, which keeps inference latency low. With the support of WebGL, the future of TensorFlow.js is very promising.

The process of using the AutoML’s TensorFlow.js model format requires two steps: exporting the model and then loading it for inference.

1. Exporting the Model

Just as in the first post of this series, we’ve trained a flower classification model on AutoML. Once you’re done training a model in AutoML, AutoML offers you different options to use the model, and one of them is using it in browsers — TensorFlow.js models.

Once the model is exported, you’ll see a model.json file, which contains the tensor information along with the weight file names, and .bin files containing the model weights. Download all the files from the bucket onto your local system.
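For reference, model.json follows the TensorFlow.js graph-model format: it describes the serialized graph and a weights manifest pointing at the .bin shards. A heavily trimmed sketch (the field values below are illustrative, not copied from an actual export) looks roughly like this:

```json
{
  "format": "graph-model",
  "modelTopology": { "node": "…serialized graph definition…" },
  "weightsManifest": [
    {
      "paths": ["group1-shard1of2.bin", "group1-shard2of2.bin"],
      "weights": [
        { "name": "example/conv/weights", "shape": [3, 3, 3, 32], "dtype": "float32" }
      ]
    }
  ]
}
```

This is why all the .bin files must sit next to model.json when you serve them: the loader resolves the manifest paths relative to the model.json URL.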

Here we are using a flower classification model that we trained on AutoML. You can easily train your models by referring to the first post of this series here.

2. Loading the Model in the Browser

The following step-by-step instructions will help you load any TF.js-based model into the browser.

2.1 Loading the required Scripts

The tfjs-automl and tfjs scripts contain the functions we’re going to use. If you want to use the model offline, you can download a copy of these scripts and include them in your HTML file:

<script src="https://unpkg.com/@tensorflow/tfjs"></script>
<script src="https://unpkg.com/@tensorflow/tfjs-automl"></script>

2.2 Creating the Image DOM element

Next, we have to create the image DOM element in the HTML file on which we’ll run the prediction. Replace the src with your image path. You can fetch the image from the web or use a local image with your local OS path:

<img id="daisy" crossorigin="anonymous" src="https://storage.googleapis.com/tfjs-testing/tfjs-automl/img_classification/daisy.jpg">

2.3 Understanding the loadImageClassification and classify functions

<script>
  async function run() {
    // Load the exported AutoML model from model.json at runtime
    const model = await tf.automl.loadImageClassification('model.json');
    // Grab the image element we added to the page
    const image = document.getElementById('daisy');
    // Run the classifier and log the predicted labels with their scores
    const predictions = await model.classify(image);
    console.log(predictions);
  }
  run();
</script>

The loadImageClassification function loads the model at runtime using model.json.

The model.json file contains information about all the .bin model files, and loading it automatically loads the model in TensorFlow.js.

The classify function runs on the image DOM element. To get a prediction for your own image, you only need to change which element is looked up here:
const image = document.getElementById('daisy');

Based on the training, the classify function returns an array of the classes along with their confidence scores.

Depending on what type of model you’ve trained, the output will look something like this:

[
  { label: 'daisy', prob: 0.9107792377471924 },
  { label: 'rose', prob: 0.08922076225 }
]
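As a quick sanity check, the confidence scores of an image classification model form a probability distribution over its classes, so they should sum to roughly 1. Using the sample output above (hard-coded here rather than coming from a live model.classify call):

```javascript
// Sample predictions mirroring the output shown above
const predictions = [
  { label: 'daisy', prob: 0.9107792377471924 },
  { label: 'rose', prob: 0.08922076225 }
];

// Sum the confidence scores across all classes
const total = predictions.reduce((sum, p) => sum + p.prob, 0);
console.log(total.toFixed(2)); // → '1.00'
```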

2.4 Setting a threshold value

Next, you can set a threshold on the prob score so that a label is accepted only when the model is sufficiently confident.

predictions.forEach((ele, index) => {
  if (ele.prob > 0.90) {
    console.log(predictions[index]);
  } else {
    console.log("Model is not confident enough");
  }
});
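An alternative to checking every prediction against the threshold is to pick only the top-scoring class and fall back to a message when even that score is below the cutoff. A minimal sketch, with the predictions array hard-coded to mirror the sample output rather than coming from model.classify:

```javascript
// Sample predictions in the shape returned by model.classify()
const predictions = [
  { label: 'daisy', prob: 0.9107792377471924 },
  { label: 'rose', prob: 0.08922076225 }
];

// Find the single most confident prediction
const top = predictions.reduce((best, p) => (p.prob > best.prob ? p : best));

// Accept it only if it clears the confidence threshold
const THRESHOLD = 0.9;
const result = top.prob > THRESHOLD
  ? top.label
  : 'Model is not confident enough';
console.log(result); // → 'daisy'
```

This avoids logging “not confident” once per low-scoring class, which the forEach version above does.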

Conclusion

In this post, we followed a step-by-step approach to load a TensorFlow.js model into the browser and predict an image of a flower. We also set a threshold value so that a flower is predicted only if the model outputs a confidence score higher than 90%.

If you want to explore other use cases, you can also check out some interesting examples from the TensorFlow.js website.

TensorFlow.js models can also be used in the backend with Node.js, allowing you to deploy them using a server—and this is what we’ll be exploring in the next part of this series.

If you liked the article, please clap your heart out. Tip — Your 50 claps will make my day!

Want to know more about me? Please check out my website. If you’d like to get updates, follow me on Twitter and Medium. If anything isn’t clear or you want to point out something, please comment down below.

Editor’s Note: Heartbeat is a contributor-driven online publication and community dedicated to providing premier educational resources for data science, machine learning, and deep learning practitioners. We’re committed to supporting and inspiring developers and engineers from all walks of life.

Editorially independent, Heartbeat is sponsored and published by Comet, an MLOps platform that enables data scientists & ML teams to track, compare, explain, & optimize their experiments. We pay our contributors, and we don’t sell ads.

If you’d like to contribute, head on over to our call for contributors. You can also sign up to receive our weekly newsletters (Deep Learning Weekly and the Comet Newsletter), join us on Slack, and follow Comet on Twitter and LinkedIn for resources, events, and much more that will help you build better ML models, faster.
