
python - TensorFlow: how to log GPU memory (VRAM) utilization?

TensorFlow always (pre-)allocates all free memory (VRAM) on my graphics card, which is fine, since I want my simulations to run as fast as possible on my workstation.

However, I would like to log how much memory TensorFlow actually uses in total. Additionally, it would be nice if I could also log how much memory individual tensors use.

This information is important for measuring and comparing the memory footprints of different ML/AI architectures.

Any tips?


1 Reply


Update: you can use TensorFlow ops to query the allocator:

# maximum across all sessions and .run calls so far
sess.run(tf.contrib.memory_stats.MaxBytesInUse())
# current usage
sess.run(tf.contrib.memory_stats.BytesInUse())
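For example, a minimal sketch of logging these around a training loop (assuming a TF 1.x session; num_steps and train_op are hypothetical placeholders for your own training setup):

# build the stat ops once, outside the loop, so the graph doesn't grow
max_bytes_op = tf.contrib.memory_stats.MaxBytesInUse()
cur_bytes_op = tf.contrib.memory_stats.BytesInUse()

for step in range(num_steps):   # num_steps: hypothetical
    sess.run(train_op)          # train_op: hypothetical
    cur, peak = sess.run((cur_bytes_op, max_bytes_op))
    print("step %d: current %d bytes, peak %d bytes" % (step, cur, peak))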

You can also get detailed information about a session.run call, including all memory allocations made during the run, by looking at RunMetadata, i.e. something like this:

run_metadata = tf.RunMetadata()
sess.run(c, options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE, output_partition_graphs=True), run_metadata=run_metadata)

Here's an end-to-end example -- take a column vector and a row vector, and add them to get a matrix of sums:

import tensorflow as tf

# disable graph optimizations so every op's allocation shows up in the trace
no_opt = tf.OptimizerOptions(opt_level=tf.OptimizerOptions.L0,
                             do_common_subexpression_elimination=False,
                             do_function_inlining=False,
                             do_constant_folding=False)
config = tf.ConfigProto(graph_options=tf.GraphOptions(optimizer_options=no_opt),
                        log_device_placement=True, allow_soft_placement=False,
                        device_count={"CPU": 3},
                        inter_op_parallelism_threads=3,
                        intra_op_parallelism_threads=1)

# place each tensor on its own (virtual) CPU device
with tf.device("cpu:0"):
    a = tf.ones((13, 1))
with tf.device("cpu:1"):
    b = tf.ones((1, 13))
with tf.device("cpu:2"):
    c = a + b

sess = tf.Session(config=config)
run_metadata = tf.RunMetadata()
sess.run(c, options=tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE,
                                  output_partition_graphs=True),
         run_metadata=run_metadata)
with open("/tmp/run2.txt", "w") as out:
    out.write(str(run_metadata))

If you open /tmp/run2.txt you'll see messages like this:

  node_name: "ones"

      allocation_description {
        requested_bytes: 52
        allocator_name: "cpu"
        ptr: 4322108320
      }
  ....

  node_name: "ones_1"

      allocation_description {
        requested_bytes: 52
        allocator_name: "cpu"
        ptr: 4322092992
      }
  ...
  node_name: "add"
      allocation_description {
        requested_bytes: 676
        allocator_name: "cpu"
        ptr: 4492163840
      }
  ...
So here you can see that a and b allocated 52 bytes each (13 float32 values × 4 bytes), and the result allocated 676 bytes (13 × 13 × 4).
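If you'd rather aggregate these numbers programmatically than eyeball the text dump, the same fields are available on the run_metadata.step_stats proto. A minimal sketch, assuming the TF 1.x StepStats layout shown in the dump above:

# hedged sketch: sum requested allocation bytes per device from the
# collected RunMetadata (field names match the text dump above)
for dev_stats in run_metadata.step_stats.dev_stats:
    total = 0
    for node_stats in dev_stats.node_stats:
        for output in node_stats.output:
            desc = output.tensor_description.allocation_description
            total += desc.requested_bytes
    print("%s: %d bytes requested" % (dev_stats.device, total))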

