import torch

def forward(x):
    return x * w                                 # linear model: y = w * x

def loss(x, y):
    y_pred = forward(x)
    return (y_pred - y) ** 2                     # squared error for a single sample

x_data = [1.0, 2.0, 3.0]
y_data = [2.0, 4.0, 6.0]

w = torch.Tensor([1.0])
w.requires_grad = True                           # track gradients for the weight

print("predict (before training)", 4, forward(4).item())

for epoch in range(100):
    for x, y in zip(x_data, y_data):
        l = loss(x, y)
        l.backward()                             # accumulate dl/dw into w.grad
        print('grad:', x, y, w.grad.item())
        w.data = w.data - 0.01 * w.grad.data     # manual SGD step on the raw data
        w.grad.data.zero_()                      # reset the gradient for the next sample
    print("progress:", epoch, l.item())

print("predict (after training)", 4, forward(4).item())
Linux distribution: Ubuntu 18.04
GPU: GeForce RTX 2080 SUPER
GPU driver: NVIDIA UNIX x86_64 Kernel Module 450.80.02
CUDA: Cuda compilation tools, release 11.0, V11.0.194
cuDNN: 8.0.3
pytorch: 1.7.0, py3.6_cuda11.0.221_cudnn8.0.3_0
python: 3.6
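For reference, the versions PyTorch itself reports can be printed from inside Python with standard torch calls (a minimal sketch, no output shown):

import torch

# report the toolchain versions PyTorch was built against and the visible GPU
print("torch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))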
The code is very simple, but it freezes the whole computer when I run it: the screen, keyboard, and mouse stop responding entirely.