
python - Cannot convert a symbolic Keras input/output to a numpy array

I am following this tutorial on creating a generative variational auto-encoder. However, I am using 3-channel (RGB) images of dogs pulled from Kaggle. When I build my auto-encoder, I get the following error:

Cannot convert a symbolic Keras input/output to a numpy array. This error may indicate that you're trying to pass a symbolic value to a NumPy call, which is not supported. Or, you may be trying to pass Keras symbolic inputs/outputs to a TF API that does not register dispatching, preventing Keras from automatically converting the API call to a lambda layer in the Functional Model.
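For reference, this seems to be the kind of situation the message describes: a symbolic Keras tensor being handed to a NumPy call. A minimal illustration (not my actual code):

import numpy as np
from tensorflow import keras

x = keras.layers.Input(shape=(4,))  # symbolic Keras tensor, has no concrete values
np.square(x)                        # raises "Cannot convert a symbolic Keras input/output to a numpy array"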

Here is the relevant code:

batch_size = 32
train_ds = tf.data.Dataset.from_tensor_slices(dog_images).batch(batch_size).prefetch(1)

...


# Build encoder

latent_dim = 32

input_data = keras.layers.Input(shape=(256, 256, 3))

encoder = keras.layers.Conv2D(256, 4, 2, "same")(input_data)

encoder = keras.layers.Conv2D(256, 4, 2, "same")(encoder)
encoder = keras.layers.LeakyReLU(0.2)(encoder)
...
encoder = keras.layers.Conv2D(latent_dim * 2, 4, 2, "same", use_bias=False)(encoder)

encoder = keras.layers.Flatten()(encoder)

encoder_mu = keras.layers.Dense(units=latent_dim, name="encoder_mu")(encoder)
encoder_log_variance = keras.layers.Dense(units=latent_dim, name="encoder_log_variance")(encoder)
encoder = Sampling()([encoder_mu, encoder_log_variance])

encoder_model = keras.Model(input_data, [encoder_mu, encoder_log_variance, encoder], name="encoder")
encoder_model.summary()

Here is an image of the summary. Something I find odd is that the shape of the input layer has an opening [ but no closing ].

[screenshot: encoder_model.summary() output]
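For completeness, Sampling is the reparameterization-trick layer from the tutorial; I have not included its definition above, but it is roughly the following (my version, details may differ from the tutorial's):

class Sampling(keras.layers.Layer):
    # Reparameterization trick: draw z ~ N(mu, sigma^2) from (mu, log_variance)
    def call(self, inputs):
        mu, log_variance = inputs
        epsilon = tf.random.normal(shape=tf.shape(mu))
        return mu + tf.exp(0.5 * log_variance) * epsilon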

Next, here is the decoder:

decoder_input = keras.layers.Input(shape=(32,))

decoder = keras.layers.Dense(4096)(decoder_input)
decoder = keras.layers.Reshape((8, 8, 64))(decoder)                          #(8, 8, 64)

decoder = keras.layers.Conv2DTranspose(512, 3, strides=2, padding="same")(decoder)      #(16, 16, 512)
decoder = keras.layers.BatchNormalization()(decoder)
decoder = keras.layers.ReLU()(decoder)
...
decoder = keras.layers.Conv2DTranspose(3, 3, strides=2, padding="same")(decoder)      #(256, 256, 3)
decoder = keras.layers.BatchNormalization()(decoder)
decoder_output = keras.layers.ReLU()(decoder)

decoder_model = keras.Model(decoder_input, decoder_output)
decoder_model.summary()

Here is the summary: [screenshot: decoder_model.summary() output]

I next build the auto-encoder and train it, but before it even trains an epoch (or, apparently, a batch) it throws the error.

_, _, encoded = encoder_model(input_data)
decoded = decoder_model(encoded)
autoencoder = keras.models.Model(input_data, decoded)
autoencoder.summary()

autoencoder.compile(loss=get_loss(encoder_mu, encoder_log_variance), optimizer='adam')
autoencoder.summary()

autoencoder.fit(train_ds, epochs=3)

As you can see, I am using a custom loss function which I've copied and pasted from the linked tutorial. Here it is:

def get_loss(distribution_mean, distribution_variance):
    
    def get_reconstruction_loss(y_true, y_pred):
        reconstruction_loss = keras.losses.mse(y_true, y_pred)
        reconstruction_loss_batch = tf.reduce_mean(reconstruction_loss)
        return reconstruction_loss_batch*256*256
    
    def get_kl_loss(distribution_mean, distribution_variance):
        kl_loss = 1 + distribution_variance - tf.square(distribution_mean) - tf.exp(distribution_variance)
        kl_loss_batch = tf.reduce_mean(kl_loss)
        return kl_loss_batch*(-0.5)
    
    def total_loss(y_true, y_pred):
        reconstruction_loss_batch = get_reconstruction_loss(y_true, y_pred)
        kl_loss_batch = get_kl_loss(distribution_mean, distribution_variance)
        return reconstruction_loss_batch + kl_loss_batch
    
    return total_loss
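For reference, get_kl_loss above is the usual closed form of KL(N(mu, sigma^2) || N(0, 1)) with distribution_variance = log(sigma^2), averaged over the batch; equivalently:

def kl_divergence(mu, log_variance):
    # -0.5 * mean(1 + log(sigma^2) - mu^2 - sigma^2)
    return -0.5 * tf.reduce_mean(1.0 + log_variance - tf.square(mu) - tf.exp(log_variance))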

Any ideas on what could be causing this issue?

EDIT:

Here are the relevant NumPy parts of my code. I load the images as NumPy arrays and pad them with black space so they are all 256x256 without distorting the aspect ratio:

import pathlib
import PIL.Image
data_dir_str = '/content/dogs/all-dogs/'
data_dir = pathlib.Path(data_dir_str)
dogs = list(data_dir.glob('*.jpg'))
PIL.Image.open(str(dogs[0]))

...

import numpy as np
from PIL import Image, ImageOps

desired_size = 256

def resizeImage(im_pth):
  im = Image.open(im_pth)
  old_size = im.size  # old_size[0] is in (width, height) format

  ratio = float(desired_size) / max(old_size)
  new_size = tuple([int(x * ratio) for x in old_size])
  # use thumbnail() or resize() method to resize the input image

  # thumbnail is an in-place operation

  # im.thumbnail(new_size, Image.ANTIALIAS)

  im = im.resize(new_size, Image.ANTIALIAS)
  # create a new image and paste the resized on it

  new_im = Image.new("RGB", (desired_size, desired_size))
  new_im.paste(im, ((desired_size - new_size[0]) // 2,
                    (desired_size - new_size[1]) // 2))

  return np.asarray(new_im)
  
new_dogs = list(map(resizeImage, dogs[:10]))

...

def scalePixels(x):
  return x.astype('float32') / 255

new_dogs = scalePixels(np.array(new_dogs))
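As a sanity check, the preprocessed array should come out like this:

print(new_dogs.shape, new_dogs.dtype)  # expected: (10, 256, 256, 3) float32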

So I have tried using new_dogs in training like this:

autoencoder.fit(new_dogs, new_dogs, epochs=3)

And I have also tried using this TF dataset:

# pass the array twice so from_tensor_slices yields (x, x) pairs,
# i.e. elements (x1, x1), (x2, x2), ...
f = lambda x: (x, x)
new_dogs_toup = f(new_dogs)

batch_size = 32

train_ds = tf.data.Dataset.from_tensor_slices(new_dogs_toup).batch(batch_size).prefetch(1)
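I believe this is equivalent to mapping each image to an (input, target) pair, e.g.:

train_ds = (tf.data.Dataset.from_tensor_slices(new_dogs)
            .map(lambda x: (x, x))
            .batch(batch_size)
            .prefetch(1))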

Finally, I ran get_loss(encoder_mu, encoder_log_variance) and here is the output: [screenshot of the output]

Question from: https://stackoverflow.com/questions/66058365/cannot-convert-a-symbolic-keras-input-output-to-a-numpy-array


1 Reply

Waiting for answers
