While looking at some PyTorch code for pose estimation (AlphaPose), I noticed some unfamiliar syntax:
Basically, we define a Darknet class which inherits from nn.Module, like so:
class Darknet(nn.Module)
This reconstructs the neural net from a config file and also defines methods to load pre-trained weights and to run a forward pass.
Now, the forward pass takes the following parameters:
def forward(self, x, CUDA)
I should note that in the class definition, forward is the only method that takes a CUDA argument (this will become important later on).
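To give a clearer picture of what I'm describing, here is roughly how I read the class skeleton (heavily simplified, with the bodies left out; the real code builds everything from the config file):

import torch.nn as nn

class Darknet(nn.Module):
    def __init__(self, cfgfile):
        super().__init__()
        # the real __init__ parses the .cfg file and builds the layers from it
        ...

    def load_weights(self, weightfile):
        # reads the binary weight file and copies values into the conv layers
        ...

    def forward(self, x, CUDA):
        # the only method in the class that takes a CUDA flag
        ...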
In the forward pass we get the predictions:
for i in range(number_of_modules):
    x = self.module[i](x)
where module[i] was constructed as:
module = nn.Sequential()
conv = nn.Conv2d(prev_filters, filters, kernel_size, stride, pad, bias=bias)
module.add_module("conv_{0}".format(index), conv)
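If it helps, my understanding is that each such block ends up in a list-like container that the loop above iterates over; here is a self-contained toy version (the nn.ModuleList container and the hard-coded channel numbers are my guesses, the real values come from the config file):

import torch.nn as nn

# toy stand-in for the config-driven construction: two conv blocks with made-up hyperparameters
layer_specs = [(3, 32), (32, 64)]                 # (prev_filters, filters) per block
module_list = nn.ModuleList()
for index, (prev_filters, filters) in enumerate(layer_specs):
    module = nn.Sequential()
    conv = nn.Conv2d(prev_filters, filters, kernel_size=3, stride=1, padding=1, bias=False)
    module.add_module("conv_{0}".format(index), conv)
    module_list.append(module)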
We then instantiate this model and (I presume) invoke its forward method like so:
self.det_model = Darknet("yolo/cfg/yolov3-spp.cfg")
self.det_model.load_weights('models/yolo/yolov3-spp.weights')
self.det_model.cpu()
self.det_model.eval()
image = image.cpu()
prediction = self.det_model(image, CUDA=False)
I assume that the last line calls the forward pass, but why not use .forward? Is this PyTorch-specific syntax, or am I missing some basic Python principle?
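To illustrate the pattern I'm asking about with a minimal, self-contained example (Tiny is my own made-up module, not anything from AlphaPose):

import torch
import torch.nn as nn

class Tiny(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x, CUDA):
        if CUDA:
            x = x.cuda()
        return self.fc(x)

net = Tiny()
x = torch.randn(1, 4)
out = net(x, CUDA=False)           # the instance is called like a function, no explicit .forward
out2 = net.forward(x, CUDA=False)  # this also runs, so why does the code prefer the first form?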