
python - RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu, in a transformer


import numpy as np
import torch
from torch.autograd import Variable

def nopeak_mask(size, opt):
    # Upper-triangular ones above the diagonal mark "future" positions
    np_mask = np.triu(np.ones((1, size, size)), k=1).astype('uint8')
    # Invert so True marks the positions attention is allowed to see
    np_mask = Variable(torch.from_numpy(np_mask) == 0)
    if opt.device == 0:
        np_mask = np_mask.cuda()
    return np_mask

def create_masks(src, trg, opt):
    # Hide padding tokens in the source sequence
    src_mask = (src != opt.src_pad).unsqueeze(-2)

    if trg is not None:
        # Hide padding tokens in the target sequence
        trg_mask = (trg != opt.trg_pad).unsqueeze(-2)
        size = trg.size(1)  # get seq_len for matrix
        np_mask = nopeak_mask(size, opt)
        #if trg.is_cuda:
        #    np_mask.cuda()
        print(np_mask)
        print(trg_mask)
        trg_mask = trg_mask & np_mask
    else:
        trg_mask = None
    return src_mask, trg_mask

This code fails on the line trg_mask = trg_mask & np_mask. I checked the two tensors and I am sure they are on different devices. The source code can be found here.
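The error can be reproduced with a small standalone example (a minimal sketch, assuming a CUDA device is available):

import torch

# `&` between a CUDA tensor and a CPU tensor raises
# "RuntimeError: Expected all tensors to be on the same device ..."
a = torch.ones(1, 5, 5, dtype=torch.bool, device="cuda:0")
b = torch.ones(1, 5, 5, dtype=torch.bool)  # defaults to cpu
c = a & b  # RuntimeError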

question from: https://stackoverflow.com/questions/65858937/runtimeerror-expected-all-tensors-to-be-on-the-same-device-but-found-at-least


1 Reply


It sounds like trg_mask and np_mask are tensors stored on two different devices (cpu and cuda:0). To combine them with &, both tensors need to be on the same device, either both on cpu or both on cuda:0.
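A general pattern is to compare each tensor's .device attribute and move one onto the other's device with .to(). Here is a minimal sketch, using a hypothetical helper name:

import torch

def align_devices(a, b):
    # Move b onto a's device so elementwise ops between them are legal;
    # .to() is a no-op if b is already on that device.
    if a.device != b.device:
        b = b.to(a.device)
    return a, b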

Based on the information given I'm not sure which variable is on which device, but if you want to move a tensor from cuda:0 to cpu you can do this:

var = var.cpu()

If you additionally want to detach it from the autograd graph and convert it to a NumPy array (so it is no longer a torch tensor), use:

var = var.detach().cpu().numpy()
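Applied to the code in the question, moving np_mask onto trg_mask's device just before the & puts both operands on the same device. A sketch, assuming trg_mask ended up on cuda:0 and np_mask on cpu:

import torch

trg_mask = torch.ones(1, 1, 10, dtype=torch.bool, device="cuda:0")
np_mask = torch.ones(1, 10, 10, dtype=torch.bool)  # cpu
np_mask = np_mask.to(trg_mask.device)  # no-op if already aligned
trg_mask = trg_mask & np_mask  # both on cuda:0; broadcasts to (1, 10, 10)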
