Variable doesn't do anything anymore and has been deprecated since PyTorch 0.4.0; its functionality was merged into the torch.Tensor class. Back then, the volatile flag was used to disable construction of the computation graph for any operation that the volatile Variable was involved in. Newer PyTorch versions instead use with torch.no_grad(): to disable construction of the computation graph for anything in the body of the with statement.
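For illustration, a rough sketch of the old pattern next to its replacement (the model and input below are made-up placeholders):

import torch
import torch.nn as nn

model = nn.Linear(10, 2)     # hypothetical model, for illustration only
images = torch.randn(4, 10)  # dummy input batch

# Old API (PyTorch <= 0.3), shown for comparison only:
#   from torch.autograd import Variable
#   output = model(Variable(images, volatile=True))

# Current API (PyTorch >= 0.4):
with torch.no_grad():
    output = model(images)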
What you should change depends on your reason for using volatile in the first place. Regardless, you will probably want to move your data to the GPU with
images = images.cuda()
targets = [ann.cuda() for ann in targets]
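As an aside, newer code often prefers the device-agnostic .to(device) idiom over calling .cuda() directly, so the same script runs with or without a GPU; a minimal sketch, assuming images and targets are defined as above:

import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
images = images.to(device)
targets = [ann.to(device) for ann in targets]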
During training you would use something like the following so that the computation graph is created (assuming standard variable names for model, criterion, and optimizer).
output = model(images)
loss = criterion(output, targets)
optimizer.zero_grad()
loss.backward()
optimizer.step()
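For context, here is a self-contained version of that training step; the model, loss, optimizer, and data below are made up purely for illustration:

import torch
import torch.nn as nn

# Hypothetical setup; substitute your own model, criterion, and optimizer
model = nn.Linear(10, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(4, 10)          # dummy input batch
targets = torch.randint(0, 2, (4,))  # dummy class labels

output = model(images)
loss = criterion(output, targets)  # compare predictions against targets
optimizer.zero_grad()              # clear gradients from the previous step
loss.backward()                    # backpropagate through the computation graph
optimizer.step()                   # update the parameters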
Since you don't need to perform backpropagation during evaluation, you would use with torch.no_grad(): to disable creation of the computation graph, which reduces the memory footprint and speeds up computation.
with torch.no_grad():
    output = model(images)
    loss = criterion(output, targets)
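Note that torch.no_grad() only disables gradient tracking; if the model contains layers such as dropout or batch norm, you would also call model.eval() to switch them to evaluation behavior. A minimal sketch, reusing the names above:

model.eval()  # put dropout/batch-norm layers into evaluation mode
with torch.no_grad():  # no computation graph is built in this block
    output = model(images)
    loss = criterion(output, targets)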