Welcome to OGeek Q&A Community for programmer and developer-Open, Learning and Share
Welcome To Ask or Share your Answers For Others


javascript - TF.js load object detection model in model.json format in browser

I've trained an object detection model (a .pb saved graph) in Python using Colab and converted it to the model.json format using the TensorFlow.js converter. I need to load this model inside the browser (no Node.js!) and run inference there. This is the structure of the model folder the converter produced:

model
| - model.json
| - labels.json
| - group1-shard1of2.bin
| - group1-shard2of2.bin

The TensorFlow documentation suggests one of the following calls to load such a model:

const model = await tf.loadGraphModel('model/model.json');

or

const model = await tf.loadLayersModel('model/model.json');

I am using the tf.loadGraphModel function. Loading the model works flawlessly, but when I try to run inference with it using this code:

// detect objects in the image.
const img = document.getElementById('img'); 
model.predict(img).then(predictions => {
    console.log('Predictions: ');
    console.log(predictions);
});

it throws the following error:

Uncaught (in promise) Error: The dict provided in model.execute(dict) has keys [...] that are not part of graph
at e.t.checkInputs (graph_executor.js:607)
at e.t.execute (graph_executor.js:193)
at e.t.execute (graph_model.js:338)
at e.t.predict (graph_model.js:291)
at predictImages (detector.php:39)

Am I using the wrong loading function, did the model loading silently fail (even though no errors were thrown), or is my inference call wrong? Thanks in advance for your support!

EDIT: After using @edkeveked's suggestion to convert the image to a tensor first using this code:

const tensor = tf.browser.fromPixels(img);

and running inference using this:

model.predict(tensor.expandDims(0));

I got this error message:

Uncaught (in promise) Error: This execution contains the node 'Preprocessor/map/while/Exit_2', which has the dynamic op 'Exit'. Please use model.executeAsync() instead. Alternatively, to avoid the dynamic ops, specify the inputs [Preprocessor/map/TensorArrayStack/TensorArrayGatherV3]
    at e.t.compile (graph_executor.js:162)
    at e.t.execute (graph_executor.js:212)
    at e.t.execute (graph_model.js:338)
    at e.t.predict (graph_model.js:291)
    at predictImages (detector.php:38)

After replacing model.predict() with model.executeAsync(), it returned a result that was not what I expected to get from an object detection model:

detector.php:40  (2) [e, e]
  0: e {kept: false, isDisposedInternal: false, shape: Array(3), dtype: "float32", size: 3834, …}
  1: e {kept: false, isDisposedInternal: false, shape: Array(4), dtype: "float32", size: 7668, …}
  length: 2
  __proto__: Array(0)
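A note on that output, as a guess from the printed sizes: 3834 = 1917 × 2 and 7668 = 1917 × 4, which looks like the raw per-anchor output of an SSD-style model (scores of shape [1, 1917, numClasses] and boxes of shape [1, 1917, 4]) rather than final detections. In the browser the values would be read back with something like `await predictions[0].array()`. Below is a minimal, framework-free sketch of the filtering step, assuming the nested arrays have already been read back; the shapes, threshold, and box ordering are assumptions about this particular export, not facts from the question:

```javascript
// Sketch: filter raw per-anchor scores/boxes into candidate detections.
// `scores` is assumed to be the nested array from `await predictions[0].array()`
// with shape [1, numAnchors, numClasses]; `boxes` is assumed to come from
// `await predictions[1].array()` with shape [1, numAnchors, 4].
function filterDetections(scores, boxes, threshold = 0.5) {
  const detections = [];
  scores[0].forEach((classScores, anchor) => {
    // Pick the best-scoring class for this anchor.
    const best = Math.max(...classScores);
    if (best >= threshold) {
      detections.push({
        anchor,
        classId: classScores.indexOf(best),
        score: best,
        box: boxes[0][anchor], // often [yMin, xMin, yMax, xMax], but check your export
      });
    }
  });
  return detections;
}
```

Note that this skips non-max suppression, so overlapping anchors will produce duplicate boxes; TensorFlow.js provides `tf.image.nonMaxSuppressionAsync` for that step.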

This is my complete code so far (images added in HTML using PHP):

<script src="https://cdn.jsdelivr.net/npm/@tensorflow/[email protected]/dist/tf.min.js"></script>
    <!-- Load the coco-ssd model. -->
    <script src="https://cdn.jsdelivr.net/npm/@tensorflow-models/[email protected]"></script>

    <script>
    async function predictImages() {
        console.log("loading model");
        // Load the model.
        const model = await tf.loadGraphModel('model/model.json');
        console.log("model loaded.");
        
        // predict for all images
        for (let i = 0; i <= 4; i++) {
            const img = document.getElementById('img' + i); // check if the image exists
            if (img != null) {
                console.log("doc exists: " + 'img' + i);
                const tensor = tf.browser.fromPixels(img);
                model.executeAsync(tensor.expandDims(0)).then(predictions => {
                    console.log('Predictions: ');
                    console.log(predictions);
                });
             
            } else {
                break;
            }
        }
    }
    predictImages();
    </script>
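Two things worth noting about the loop above: the `executeAsync` promises are fired without being awaited, so predictions may log out of order, and every tensor created by `fromPixels`/`executeAsync` stays in memory until it is explicitly disposed. The sketch below shows one sequential variant; `tf` and `model` are passed in as parameters purely for illustration, and the helper name is made up:

```javascript
// Sketch: run inference over images one at a time and free tensors afterwards.
// `tf` is the TensorFlow.js namespace and `model` a loaded GraphModel; both
// are parameters here so the helper itself has no hard dependency on either.
async function predictImagesSequentially(tf, model, imageElements) {
  const results = [];
  for (const img of imageElements) {
    const pixels = tf.browser.fromPixels(img);
    const input = pixels.expandDims(0); // add the batch dimension
    const outputs = await model.executeAsync(input); // array of output tensors
    // Copy the values back into plain JS arrays, then release the tensors.
    results.push(await Promise.all(outputs.map(t => t.array())));
    pixels.dispose();
    input.dispose();
    outputs.forEach(t => t.dispose());
  }
  return results;
}
```

In the browser this would be called after loading, e.g. with the list of existing `img0`…`img4` elements; awaiting each `executeAsync` keeps the logged predictions in image order.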


1 Reply


model.predict expects a tensor, but it is being given an HTMLImageElement. A tensor first needs to be constructed from the HTMLImageElement:

const tensor = tf.browser.fromPixels(img)

And then the tensor can be used as the parameter to model.predict

model.predict(tensor) // if the model expects a 3d input

Last but not least, make sure that the tensor shape matches what the model expects (3d or 4d). If the model expects a 4d input, it should instead be:

model.predict(tensor.expandDims(0))
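To make the shape handling concrete: `tf.browser.fromPixels` returns a rank-3 tensor of shape [height, width, channels], and `expandDims(0)` prepends a batch axis, giving [1, height, width, channels]. A tiny shape-bookkeeping sketch (no actual tensor data, the helper name is made up):

```javascript
// Sketch: what expandDims(0) does to a tensor's shape, as plain bookkeeping.
// fromPixels on a 640x480 RGB image yields shape [480, 640, 3];
// expandDims(0) inserts a batch axis at position 0.
function expandDims0(shape) {
  return [1, ...shape];
}
```

In TensorFlow.js itself, a loaded GraphModel exposes its expected input signature via `model.inputs[0].shape`, which is one way to check whether the extra batch dimension is needed.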
