
What's the purpose of torch.autograd.Variable?

Asked by Andrew Henderson

I load features and labels from my training dataset. Both are originally NumPy arrays, but I convert them to torch tensors using torch.from_numpy(features.copy()) and torch.tensor(labels.astype(np.bool)).

I also noticed that torch.autograd.Variable seems to be something like a placeholder in TensorFlow.

When I train my network, I first tried:

features = features.cuda()
labels = labels.cuda()
outputs = Config.MODEL(features)
loss = Config.LOSS(outputs, labels)

Then I tried:

features = features.cuda()
labels = labels.cuda()
input_var = Variable(features)
target_var = Variable(labels)
outputs = Config.MODEL(input_var)
loss = Config.LOSS(outputs, target_var)

Both blocks train successfully, but I worry that there might be a subtle difference between them.

1 Answer

According to this question, you no longer need Variables to use PyTorch's autograd.

Thanks to @skytree, we can make this even more explicit: Variables have been deprecated, i.e. you're not supposed to use them anymore.

Autograd automatically supports Tensors with requires_grad set to True.
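As a minimal sketch of what that means in practice (the values here are illustrative, not from the question):

```python
import torch

# A tensor created with requires_grad=True is all autograd needs --
# no Variable wrapper required.
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # y = x0^2 + x1^2
y.backward()         # populates x.grad with dy/dx = 2*x

print(x.grad)  # tensor([4., 6.])
```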

And more importantly

Variable(tensor) and Variable(tensor, requires_grad) still work as expected, but they return Tensors instead of Variables.

This means that if your features and labels are already tensors (which they appear to be in your example), Variable(features) and Variable(labels) simply return tensors again.
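You can check this yourself; a quick sketch (recent PyTorch versions may also emit a UserWarning when Variable is used):

```python
import torch
from torch.autograd import Variable  # deprecated, kept only for compatibility

t = torch.zeros(2)
v = Variable(t)  # no error, but the result is an ordinary Tensor

print(type(v).__name__)  # Tensor
```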

The original purpose of Variables was to be able to use automatic differentiation (Source):

Variables were just wrappers around tensors so that you could easily auto-compute gradients.
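Putting this together, a modern (PyTorch >= 0.4) training step needs no Variable at all. A minimal CPU sketch, where the question's Config.MODEL and Config.LOSS are replaced by a hypothetical stand-in model and loss:

```python
import torch
import torch.nn as nn

# Stand-ins for the question's Config.MODEL and Config.LOSS (hypothetical).
model = nn.Linear(4, 2)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(8, 4)           # plain tensors -- no Variable needed
labels = torch.randint(0, 2, (8,))     # integer class labels

outputs = model(features)              # autograd tracks this automatically
loss = loss_fn(outputs, labels)
loss.backward()                        # gradients land in model.parameters()

print(model.weight.grad.shape)  # torch.Size([2, 4])
```

Adding .cuda() calls (as in the question) changes only where the tensors live, not how autograd works.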

