So, I have been working on neural style transfer in PyTorch, but I'm stuck at the point where the input image has to be run through a limited number of layers so the style loss can be minimized. Long story short, I want to find a way in PyTorch to evaluate the input at different layers of the architecture (I'm using VGG16). I have seen this problem solved very simply in Keras, but I wanted to see whether there is a similar way in PyTorch as well.
from keras.applications.vgg16 import VGG16
from keras.models import Model  # Model was used below but not imported

model = VGG16()
model = Model(inputs=model.inputs, outputs=model.layers[1].output)
question from:
https://stackoverflow.com/questions/66051641/how-to-create-a-submodel-from-a-pretrained-model-in-pytorch-without-having-to-re