I have a neural network which produces an output, output. I want to transform output before the loss and backpropagation happen.

Here is my general code:
with torch.set_grad_enabled(training):
    outputs = net(x_batch[:, 0], x_batch[:, 1])  # the prediction of the NN

    # My issue is here:
    outputs = transform_torch(outputs)
    loss = my_loss(outputs, y_batch)

    if training:
        scheduler.step()
        loss.backward()
        optimizer.step()
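For context, here are hypothetical stand-ins for the objects referenced above (my real net, my_loss, optimizer, and scheduler are more involved, and everything runs on cuda:0; these are only so the snippet is self-contained):

import torch
import torch.nn as nn

class Net(nn.Module):
    # Toy stand-in: takes the two input columns as separate arguments,
    # matching the call net(x_batch[:, 0], x_batch[:, 1]) above
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(2, 5)

    def forward(self, a, b):
        return self.fc(torch.stack([a, b], dim=1))

net = Net()
my_loss = nn.MSELoss()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10)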
I have a transformation function that I pass my outputs through:
def transform_torch(predictions):
    torch_dimensions = predictions.size()
    torch_grad = predictions.grad_fn
    cuda0 = torch.device('cuda:0')
    new_tensor = torch.ones(torch_dimensions, dtype=torch.float64, device=cuda0, requires_grad=True)
    for i in range(len(predictions)):
        a = predictions[i]
        # with torch.no_grad():  # Note: no training happens if this line is kept in
        # Reversed cumulative sum of the row, written into new_tensor in place:
        new_tensor[i] = torch.flip(torch.cumsum(torch.flip(a, dims=[0]), dim=0), dims=[0])
    return new_tensor
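For clarity, the transform is just meant to replace each row with its reversed cumulative sum. A standalone sketch with made-up values:

import torch

a = torch.tensor([1., 2., 3.])
# Reversed cumulative sum: [1+2+3, 2+3, 3]
out = torch.flip(torch.cumsum(torch.flip(a, dims=[0]), dim=0), dims=[0])
print(out)  # tensor([6., 5., 3.])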
My problem is that I get an error at the second-to-last line (the assignment into new_tensor):
RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.
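The same error can be reproduced in isolation, assuming the culprit is the in-place write into the leaf tensor new_tensor (shapes and values here are arbitrary):

import torch

predictions = torch.randn(4, 3)
new_tensor = torch.ones(4, 3, requires_grad=True)  # a leaf tensor
# Indexed assignment is an in-place op on a view of the leaf:
new_tensor[0] = predictions[0]  # raises the RuntimeError above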
Any suggestions? I have tried using "with torch.no_grad():" (commented out above), but that results in very poor training, and I believe the gradients don't backpropagate properly after the transformation function.

Thanks!