There are two issues that are causing problems here:
The first issue is that the Session.run() call only accepts a small number of types as the keys of the feed_dict. In particular, lists of tensors are not supported as keys, so you have to put each tensor as a separate key.* One convenient way to do this is with a dictionary comprehension:
inputs = [tf.placeholder(...), ...]
data = [np.array(...), ...]
sess.run(y, feed_dict={i: d for i, d in zip(inputs, data)})
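For concreteness, here is a minimal self-contained version of that pattern; the shapes, data, and the tf.add_n op are made up for illustration, so substitute your own graph:

import numpy as np
import tensorflow as tf

# Two placeholders with made-up shapes, just to make the pattern concrete.
inputs = [tf.placeholder(tf.float32, shape=(2, 3)),
          tf.placeholder(tf.float32, shape=(2, 3))]
data = [np.ones((2, 3), dtype=np.float32),
        np.zeros((2, 3), dtype=np.float32)]

y = tf.add_n(inputs)  # stands in for whatever op consumes the placeholders

with tf.Session() as sess:
    # One feed_dict entry per placeholder, built with a dict comprehension.
    print(sess.run(y, feed_dict={i: d for i, d in zip(inputs, data)}))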
The second issue is that the 10 * [tf.placeholder(...)] syntax in Python creates a list with ten elements, where each element is the same tensor object: each one has the same name, the same id(), and is reference-identical if you compare two elements of the list using inputs[i] is inputs[j]. This explains why, when you tried to build a dictionary using the list elements as keys, you ended up with a dictionary containing a single entry: all of the list elements were identical, so they collapsed into a single key.
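You can confirm the aliasing directly; this is plain Python list semantics rather than anything TensorFlow-specific:

inputs = 10 * [tf.placeholder(tf.float32)]

print(inputs[0] is inputs[9])          # True: every element is the same object
print(len({i: 0.0 for i in inputs}))   # 1: identical keys collapse into one entry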
To create 10 different placeholder tensors, as you intended, you should instead do the following:
inputs = [tf.placeholder(tf.float32, shape=(batch_size, input_size))
          for _ in xrange(10)]
If you print the elements of this list, you'll see that each element is a tensor with a different name.
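For example, continuing from the snippet above (the exact names depend on what else is already in your graph, but they will all be distinct):

for i in inputs:
    print(i.name)   # e.g. "Placeholder:0", "Placeholder_1:0", ..., "Placeholder_9:0"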
EDIT: * You can now pass a tuple of tensors as a key in a feed_dict, because tuples (unlike lists) are hashable and can therefore be used as dictionary keys.
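For example, something along these lines should work on sufficiently recent 1.x releases; the value must be a tuple with the same structure as the key, and you should treat this as a sketch rather than a guarantee for your particular version:

sess.run(y, feed_dict={tuple(inputs): tuple(data)})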