I've trained a .pb object detection model in Python using Colab and converted it to the model.json format using the TensorFlow.js converter. I need to load this model in the browser (no Node.js!) and run inference there.
This is the structure of the model folder the converter produced:
model
| - model.json
| - labels.json
| - group1-shard1of2.bin
| - group1-shard2of2.bin
The TensorFlow.js documentation suggests the following to load such a model:
const model = await tf.loadGraphModel('model/model.json');
or
const model = await tf.loadLayersModel('model/model.json');
I am using the tf.loadGraphModel function, since the model was converted from a frozen graph (as far as I understand, tf.loadLayersModel is for converted Keras models). Loading the model works flawlessly, but when I try to run inference with it using this code:
// detect objects in the image.
const img = document.getElementById('img');
model.predict(img).then(predictions => {
console.log('Predictions: ');
console.log(predictions);
});
it throws the following error:
Uncaught (in promise) Error: The dict provided in model.execute(dict) has keys [...] that are not part of graph
at e.t.checkInputs (graph_executor.js:607)
at e.t.execute (graph_executor.js:193)
at e.t.execute (graph_model.js:338)
at e.t.predict (graph_model.js:291)
at predictImages (detector.php:39)
Am I using the wrong loading function, did the model loading silently fail (even though it threw no errors), or is the inference call wrong?
Thanks in advance for your support!
EDIT: Following @edkeveked's suggestion, I first convert the image to a tensor using this code:
const tensor = tf.browser.fromPixels(img);
and running inference using this:
model.predict(tensor.expandDims(0));
I got this error message:
Uncaught (in promise) Error: This execution contains the node 'Preprocessor/map/while/Exit_2', which has the dynamic op 'Exit'. Please use model.executeAsync() instead. Alternatively, to avoid the dynamic ops, specify the inputs [Preprocessor/map/TensorArrayStack/TensorArrayGatherV3]
at e.t.compile (graph_executor.js:162)
at e.t.execute (graph_executor.js:212)
at e.t.execute (graph_model.js:338)
at e.t.predict (graph_model.js:291)
at predictImages (detector.php:38)
After replacing model.predict() with model.executeAsync(), it returned a result, but not one I would expect from an object detection model:
detector.php:40
(2) [e, e]
0: e {kept: false, isDisposedInternal: false, shape: Array(3), dtype: "float32", size: 3834, …}
1: e {kept: false, isDisposedInternal: false, shape: Array(4), dtype: "float32", size: 7668, …}
length: 2
__proto__: Array(0)
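So the result seems to be an array of two tensors. This is how I'm inspecting them for now (treating the first tensor as scores and the second as box coordinates is only my guess, based on the sizes logged above):
// Sketch: inspecting the two output tensors inside the .then() callback.
// Reading them as [scores, boxes] is my assumption, not something the model guarantees.
const [scores, boxes] = predictions;
console.log(scores.shape); // the 3-element shape, size 3834
console.log(boxes.shape);  // the 4-element shape, size 7668

// Pull the raw values out of the tensors synchronously:
const scoreData = scores.dataSync(); // Float32Array of length 3834
const boxData = boxes.dataSync();    // Float32Array of length 7668
console.log(scoreData.slice(0, 10)); // first few raw score values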
This is my complete code so far (images added in HTML using PHP):
<script src="https://cdn.jsdelivr.net/npm/@tensorflow/[email protected]/dist/tf.min.js"></script>
<!-- Load the coco-ssd model. -->
<script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/[email protected]"></script>
<script>
    async function predictImages() {
        console.log("loading model");
        // Load the converted graph model.
        const model = await tf.loadGraphModel('model/model.json');
        console.log("model loaded.");
        // Run a prediction for every image on the page.
        for (let i = 0; i <= 4; i++) {
            const img = document.getElementById('img' + i);
            // Check that the image exists before running inference on it.
            if (img != null) {
                console.log("doc exists: " + 'img' + i);
                const tensor = tf.browser.fromPixels(img);
                model.executeAsync(tensor.expandDims(0)).then(predictions => {
                    console.log('Predictions: ');
                    console.log(predictions);
                });
            } else {
                break;
            }
        }
    }
    predictImages();
</script>
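In case it helps with an answer, this is roughly how I plan to turn the raw tensors into detections once the output format is confirmed. It's only a sketch under my own assumptions (predictions[0] holding scores of shape [1, numAnchors, numClasses] and predictions[1] holding boxes of shape [1, numAnchors, 1, 4] would fit the sizes 3834 and 7668 above with 1917 anchors and 2 classes), and it skips any anchor decoding or non-max suppression the model might still require:
// Sketch: turning the raw output tensors into {score, classId, box} objects.
// The score threshold and the box layout ([yMin, xMin, yMax, xMax], normalized)
// are assumptions of mine that I still need to verify.
function decodeDetections(predictions, scoreThreshold = 0.5) {
    const [, numAnchors, numClasses] = predictions[0].shape;
    const scores = predictions[0].dataSync(); // length numAnchors * numClasses
    const boxes = predictions[1].dataSync();  // length numAnchors * 4

    const detections = [];
    for (let a = 0; a < numAnchors; a++) {
        // Find the highest-scoring class for this anchor.
        let bestScore = 0;
        let bestClass = -1;
        for (let c = 0; c < numClasses; c++) {
            const s = scores[a * numClasses + c];
            if (s > bestScore) {
                bestScore = s;
                bestClass = c;
            }
        }
        if (bestScore >= scoreThreshold) {
            detections.push({
                score: bestScore,
                classId: bestClass,
                box: Array.from(boxes.slice(a * 4, a * 4 + 4)),
            });
        }
    }
    return detections;
}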