python - PyTorch most efficient Jacobian/Hessian calculation

I am looking for the most efficient way to get the Jacobian of a function through PyTorch, and have so far come up with the following solutions:

# Setup
import torch
from torch.autograd import Variable, grad
from time import time

def func(X):
    return torch.stack((X.pow(2).sum(1),
                        X.pow(3).sum(1),
                        X.pow(4).sum(1)), 1)

X = Variable(torch.ones(1, int(1e5)) * 2.00094, requires_grad=True).cuda()
# Solution 1: one autograd.grad call per output component
t = time()
Y = func(X)
J = torch.zeros(3, int(1e5))

for i in range(3):
    # each call fills one row of the Jacobian
    J[i] = grad(Y[0][i], X, create_graph=True, retain_graph=True, allow_unused=True)[0]

print(time() - t)
>>> Output: 0.002 s
# Solution 2: one batched backward pass with an identity grad_output
def Jacobian(f, X):
    X_batch = Variable(X.repeat(3, 1), requires_grad=True)
    # each row of the identity selects one output component,
    # so X_batch.grad ends up holding the full Jacobian
    f(X_batch).backward(torch.eye(3).cuda(), retain_graph=True)
    return X_batch.grad

t = time()
J2 = Jacobian(func, X)
print(time() - t)
>>> Output: 0.001 s

Since there does not seem to be a big difference between the loop in the first solution and the batched approach in the second, I wanted to ask whether there is still a faster way to calculate a Jacobian in PyTorch.

My other question is what the most efficient way to calculate the Hessian would be.

Finally, does anyone know whether something like this can be done more easily or more efficiently in TensorFlow?


1 Reply


The most efficient method is likely to use PyTorch's own built-in functions:

torch.autograd.functional.jacobian(func, x)
torch.autograd.functional.hessian(func, x)
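
A minimal sketch of how these might be used with func from the question (the small input shape and the sum() reduction inside the lambda are only for illustration, since hessian expects a scalar-valued function):

import torch
from torch.autograd.functional import jacobian, hessian

def func(X):
    return torch.stack((X.pow(2).sum(1),
                        X.pow(3).sum(1),
                        X.pow(4).sum(1)), 1)

X = torch.full((1, 5), 2.00094)

# Jacobian of the vector-valued func; result has shape (1, 3, 1, 5)
J = jacobian(func, X)

# hessian() expects a scalar-valued function, so reduce the output first
H = hessian(lambda x: func(x).sum(), X)   # shape (1, 5, 1, 5)

Depending on the PyTorch version, jacobian also accepts a vectorize=True argument that may speed up the computation for larger inputs.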
