
python - How do I access Tensor values (e.g. Metrics) which are updated within a tf.function?

I have been working on a model whose training loop uses a tf.function wrapper (I get OOM errors when running eagerly), and training seems to run fine. However, I am not able to access the tensor values returned by my custom training function (below):

def train_step(inputs, target):
    with tf.GradientTape() as tape:
        predictions = model(inputs, training=True)
        curr_loss = lovasz_softmax_flat(predictions, target)

    gradients = tape.gradient(curr_loss, model.trainable_variables)
    opt.apply_gradients(zip(gradients, model.trainable_variables))
    
    # Need to access this value
    return curr_loss

A simplified version of my 'umbrella' training loop is as follows:

@tf.function
def train_loop():
    for epoch in range(EPOCHS):
        for tr_file in train_files:
            tr_inputs = preprocess(tr_file)
            tr_loss = train_step(tr_inputs, target)
            print(tr_loss.numpy())

When I do try to print out the loss value, I end up with the following error:

AttributeError: 'Tensor' object has no attribute 'numpy'

I also tried using tf.print() as follows:

tf.print("Loss: ", tr_loss, output_stream=sys.stdout)

But nothing seems to appear on the terminal. Any suggestions?


1 Reply

You can't convert a Tensor to a NumPy array in graph mode. Instead, create a tf.metrics object outside the function and update it inside the function:

mean_loss_values = tf.metrics.Mean()

def train_step(inputs, target):
    with tf.GradientTape() as tape:
        predictions = model(inputs, training=True)
        curr_loss = lovasz_softmax_flat(predictions, target)

    gradients = tape.gradient(curr_loss, model.trainable_variables)
    opt.apply_gradients(zip(gradients, model.trainable_variables))

    # Update the running mean metric with this step's loss.
    # Calling the metric object updates its state; equivalently:
    # mean_loss_values.update_state(curr_loss)
    mean_loss_values(curr_loss)

    return curr_loss

Then later in your code:

mean_loss_values.result()
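
As a rough sketch, here is one way the train_loop from the question could read and reset the metric once per epoch (keeping the EPOCHS, train_files, preprocess, and target names from the question). Note that tf.print is graph-compatible, while tensor.numpy() is not:

@tf.function
def train_loop():
    for epoch in range(EPOCHS):
        for tr_file in train_files:
            tr_inputs = preprocess(tr_file)
            train_step(tr_inputs, target)
        # tf.print works inside a tf.function, unlike .numpy()
        tf.print("Epoch", epoch, "mean loss:", mean_loss_values.result())
        # Clear the accumulated state before the next epoch
        # (reset_state() in newer TF versions)
        mean_loss_values.reset_states()

Because the metric keeps its state in tf.Variables, the accumulated value survives graph execution, so after train_loop() returns you can also read it eagerly with mean_loss_values.result().numpy().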
