python - Replacing placeholder for tensorflow v2

For my project, I need to convert a directed graph into a TensorFlow implementation of the graph, as if it were a neural network. In TensorFlow v1 I could just define all of my inputs as placeholders and then generate the dataflow graph for the outputs using a breadth-first search of the graph. Then I would feed in my inputs using a feed_dict. However, in TensorFlow v2.0 they have decided to do away with placeholders entirely.

How would I make a tf.function for each graph that takes a variable number of inputs and returns a variable number of outputs without using a placeholder?

I want to generate a tf.function like this that works for an arbitrary directed acyclic graph, so that I can take advantage of TensorFlow's GPU support to run the graph feed-forward a few thousand times in a row after I have generated it.
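For a single, hand-written graph the v2 idiom seems clear enough: the function arguments play the role the placeholders used to, and tf.function traces them. A minimal sketch using the weights from the example below (B and C feed A, which feeds D):

import tensorflow as tf

@tf.function
def forward(b, c):
    a = 2.0 * b - 1.0 * c  # A's incoming links: (B, 2) and (C, -1)
    return 3.0 * a         # D's incoming link: (A, 3)

print(forward(tf.constant(1.0), tf.constant(1.0)))  # tf.Tensor(3.0, ...)

My problem is generating such a function programmatically for an arbitrary graph.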


Edit for code example:

My graph is defined as a dictionary. Each key represents a node, and its value is another dictionary specifying the node's incoming and outgoing links together with their weights.

{
    "A": {
        "incoming": [("B", 2), ("C", -1)],
        "outgoing": [("D", 3)]
    }
}

I have omitted the entries for B, C, and D for brevity. Here is how I would construct the code I want in TensorFlow v1.0, where inputs is just a list of keys that are strictly inputs to the graph:

import tensorflow as tf

def construct_graph(graph_dict, inputs, outputs):
    queue = inputs[:]
    make_dict = {}
    for key in graph_dict:
        if key in inputs:
            make_dict[key] = tf.placeholder(tf.float32, name=key)
        else:
            make_dict[key] = None
    # Breadth-first search of the graph starting from the inputs
    while len(queue) != 0:
        cur = graph_dict[queue[0]]
        for outg in cur["outgoing"]:
            if make_dict[outg[0]] is not None: # If discovered node, do add/multiply operation
                make_dict[outg[0]] = tf.add(make_dict[outg[0]], tf.multiply(outg[1], make_dict[queue[0]]))
            else: # If undiscovered node, the input is just the incoming value multiplied; enqueue its successors
                make_dict[outg[0]] = tf.multiply(make_dict[queue[0]], outg[1])
                for outgo in graph_dict[outg[0]]["outgoing"]:
                    queue.append(outgo[0])
        queue.pop(0)
    # Return one dataflow graph for each output
    return [make_dict[x] for x in outputs]

I would then be able to run the outputs many times, as they are simply graphs with placeholders for which I would provide a feed_dict.
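For completeness, the TF v1 run loop would look roughly like this (a sketch; the "B:0" and "C:0" feed keys follow from the name=key placeholders, and it assumes a graph_dict with full entries for B, C, and D):

outs = construct_graph(graph_dict, ["B", "C"], ["A"])
with tf.Session() as sess:
    for _ in range(1000):
        # Placeholders can be fed by tensor name since they were named above
        result = sess.run(outs, feed_dict={"B:0": 1.0, "C:0": 1.0})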

Obviously, this is not the intended way in TensorFlow v2.0, as they seem to strongly discourage the use of placeholders in this new version.

The point is that I only have to do this preprocessing for a graph once, as it returns a dataflow graph that is independent of the graph_dict definition.


1 Reply


Make your code work with TF 2.0

Below is sample code which you can use with TF 2.0. It relies on the compatibility API, accessible as tensorflow.compat.v1, and requires disabling v2 behaviors. I don't know if it behaves as you expect; if not, please give us more detail about what you are trying to achieve.

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

@tf.function
def construct_graph(graph_dict, inputs, outputs):
    queue = inputs[:]
    make_dict = {}
    for key in graph_dict:
        if key in inputs:
            make_dict[key] = tf.placeholder(tf.float32, name=key)
        else:
            make_dict[key] = None
    # Breadth-first search of the graph starting from the inputs
    while len(queue) != 0:
        cur = graph_dict[queue[0]]
        for outg in cur["outgoing"]:
            if make_dict[outg[0]] is not None: # If discovered node, do add/multiply operation
                make_dict[outg[0]] = tf.add(make_dict[outg[0]], tf.multiply(outg[1], make_dict[queue[0]]))
            else: # If undiscovered node, the input is just the incoming value multiplied; enqueue its successors
                make_dict[outg[0]] = tf.multiply(make_dict[queue[0]], outg[1])
                for outgo in graph_dict[outg[0]]["outgoing"]:
                    queue.append(outgo[0])
        queue.pop(0)
    # Return one dataflow graph for each output
    return [make_dict[x] for x in outputs]

def main():
    graph_def = {
        # Note: construct_graph only reads the "outgoing" lists;
        # the "incoming" entries below simply mirror them.
        "B": {
            "incoming": [],
            "outgoing": [("A", 1.0)]
        },
        "C": {
            "incoming": [],
            "outgoing": [("A", 1.0)]
        },
        "A": {
            "incoming": [("B", 1.0), ("C", 1.0)],
            "outgoing": [("D", 3.0)]
        },
        "D": {
            "incoming": [("A", 3.0)],
            "outgoing": []
        }
    }
    outputs = construct_graph(graph_def, ["B", "C"], ["A"])
    print(outputs)

if __name__ == "__main__":
    main()
Running this prints:

[<tf.Tensor 'PartitionedCall:0' shape=<unknown> dtype=float32>]

Migrate your code to TF 2.0

While the above snippet is valid, it is still tied to TF 1.0. To migrate it to TF 2.0, you have to refactor your code a little.

Instead of returning a list of tensors (which, in TF 1.0, you would evaluate through a session), I advise you to return a list of keras.Model objects, which are directly callable.

Below is a working example:

import tensorflow as tf

def construct_graph(graph_dict, inputs, outputs):
    queue = inputs[:]
    make_dict = {}
    for key in graph_dict:
        if key in inputs:
            # Use keras.Input instead of placeholders
            make_dict[key] = tf.keras.Input(name=key, shape=(), dtype=tf.dtypes.float32)
        else:
            make_dict[key] = None
    # Breadth-first search of the graph starting from the inputs
    while len(queue) != 0:
        cur = graph_dict[queue[0]]
        for outg in cur["outgoing"]:
            if make_dict[outg[0]] is not None: # If discovered node, do add/multiply operation
                make_dict[outg[0]] = tf.keras.layers.add([
                    make_dict[outg[0]],
                    tf.keras.layers.multiply(
                        [[outg[1]], make_dict[queue[0]]],
                    )],
                )
            else: # If undiscovered node, the input is just the incoming value multiplied; enqueue its successors
                make_dict[outg[0]] = tf.keras.layers.multiply(
                    [make_dict[queue[0]], [outg[1]]]
                )
                for outgo in graph_dict[outg[0]]["outgoing"]:
                    queue.append(outgo[0])
        queue.pop(0)
    # Return one keras Model per output, all sharing the same inputs
    model_inputs = [make_dict[key] for key in inputs]
    model_outputs = [make_dict[key] for key in outputs]
    return [tf.keras.Model(inputs=model_inputs, outputs=o) for o in model_outputs]

def main():
    graph_def = {
        # Note: construct_graph only reads the "outgoing" lists;
        # the "incoming" entries below simply mirror them.
        "B": {
            "incoming": [],
            "outgoing": [("A", 1.0)]
        },
        "C": {
            "incoming": [],
            "outgoing": [("A", 1.0)]
        },
        "A": {
            "incoming": [("B", 1.0), ("C", 1.0)],
            "outgoing": [("D", 3.0)]
        },
        "D": {
            "incoming": [("A", 3.0)],
            "outgoing": []
        }
    }
    outputs = construct_graph(graph_def, ["B", "C"], ["A"])
    print("Builded models:", outputs)
    for o in outputs:
        o.summary(120)
        print("Output:", o((1.0, 1.0)))

if __name__ == "__main__":
    main()

What to notice here?

  • The change from placeholder to keras.Input requires setting the shape of the input.
  • Use keras.layers.[add|multiply] for the computation. This is probably not required, but it keeps everything behind one interface. However, it requires wrapping scalar factors inside a list to handle batching (see the sketch right after this list).
  • Build a keras.Model to return.
  • Call your model with a tuple of values (not a dictionary anymore).
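To illustrate the batching point: since each input is declared with shape=(), it carries an implicit batch dimension, so one call evaluates many samples at once. A minimal sketch, assuming the construct_graph and graph_def defined above:

import tensorflow as tf

model = construct_graph(graph_def, ["B", "C"], ["A"])[0]
# Three samples of B and C evaluated in a single forward pass
batch_b = tf.constant([1.0, 2.0, 3.0])
batch_c = tf.constant([1.0, 1.0, 1.0])
print(model((batch_b, batch_c)))  # A = B + C per sample: [2. 3. 4.]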

Here is the output of the working example above.

Built models: [<tensorflow.python.keras.engine.training.Model object at 0x7fa0b49f0f50>]
Model: "model"
________________________________________________________________________________________________________________________
Layer (type)                           Output Shape               Param #       Connected to                            
========================================================================================================================
B (InputLayer)                         [(None,)]                  0                                                     
________________________________________________________________________________________________________________________
C (InputLayer)                         [(None,)]                  0                                                     
________________________________________________________________________________________________________________________
tf_op_layer_mul (TensorFlowOpLayer)    [(None,)]                  0             B[0][0]                                 
________________________________________________________________________________________________________________________
tf_op_layer_mul_1 (TensorFlowOpLayer)  [(None,)]                  0             C[0][0]                                 
________________________________________________________________________________________________________________________
add (Add)                              (None,)                    0             tf_op_layer_mul[0][0]                   
                                                                                tf_op_layer_mul_1[0][0]                 
========================================================================================================================
Total params: 0
Trainable params: 0
Non-trainable params: 0
________________________________________________________________________________________________________________________
Output: tf.Tensor([2.], shape=(1,), dtype=float32)
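
Finally, to come back to the original goal of running the graph feed-forward a few thousand times in a row on a GPU: one possible pattern (a sketch, again assuming the definitions above) is to wrap the model call in a tf.function, so the computation is traced once and each later call replays the compiled graph:

import tensorflow as tf

model = construct_graph(graph_def, ["B", "C"], ["A"])[0]

@tf.function
def forward(b, c):
    # Traced on the first call; subsequent calls reuse the same graph
    # (placed on GPU automatically if one is available)
    return model((b, c))

b = tf.constant([1.0])
c = tf.constant([1.0])
for _ in range(1000):
    out = forward(b, c)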
