
python - Is there a way to figure out whether a PyTorch model is on the CPU or on the device?

I would like to figure out whether a PyTorch model is on the CPU or on CUDA, in order to initialize some other variable as torch.Tensor or torch.cuda.Tensor depending on the model.

However, looking at the output of dir(), I see only the .cpu(), .cuda(), and .to() methods, which move the model to the CPU, a GPU, or whatever device is specified in to(). A PyTorch tensor has an is_cuda attribute, but there is no analogue for the whole model.

Is there some way to deduce this for a model, or does one need to check a particular weight?
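
For illustration, here is the kind of branching I have in mind (a toy sketch; the flag is hard-coded because getting it from the model is exactly the missing piece):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)   # toy model for illustration
model_is_on_cuda = False  # <-- hard-coded placeholder; how to get this from the model?

# Pick the tensor type depending on where the model lives.
if model_is_on_cuda:
    buf = torch.cuda.FloatTensor(8).zero_()
else:
    buf = torch.FloatTensor(8).zero_()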

Question from: https://stackoverflow.com/questions/65941179/is-there-a-way-to-figure-out-whether-pytorch-model-is-on-cpu-or-on-the-device


1 Reply


No, there is no such function for nn.Module; I believe this is because a module's parameters can be on multiple devices at the same time.

If you're working with a single device, a workaround is to check the first parameter:

next(model.parameters()).is_cuda

As described here.
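
For instance, a minimal sketch (my own example, assuming all parameters sit on a single device) that uses the first parameter's .device attribute to allocate new tensors on the same device as the model, instead of branching on torch.Tensor vs torch.cuda.Tensor:

import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # toy model for illustration

# Infer the device from the first parameter; this assumes the whole
# model lives on one device.
device = next(model.parameters()).device
print(device)                              # e.g. cpu or cuda:0
print(next(model.parameters()).is_cuda)    # True only on a CUDA device

# Allocate another tensor on the same device as the model.
x = torch.zeros(4, device=device)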


