Predictions in PyTorch - A Coding Perspective

In this post, I will describe the changes needed to make predictions when using different loss functions.

We assume that all outputs are calculated using:

outputs_test = self.model(inputs)
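
For concreteness, here is a minimal sketch of that setup; the model architecture and batch shape are hypothetical stand-ins, but prediction should always run in eval mode and under torch.no_grad():

import torch
import torch.nn as nn

# Hypothetical stand-in for self.model: 20 input features, 1 output neuron.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))

model.eval()                        # disable dropout / batch-norm updates
with torch.no_grad():               # no gradients needed for prediction
    inputs = torch.randn(8, 20)     # dummy batch of 8 samples
    outputs_test = model(inputs)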

Types of Loss Functions

For a binary problem, you may use BCELoss() or BCEWithLogitsLoss(). The difference can be summarized as

BCEWithLogitsLoss() = BCELoss() + sigmoid

i.e. if you are using BCEWithLogitsLoss() for a binary classification problem, remove the sigmoid as the last layer of your model, since the loss function applies it internally.

Both require the number of output neurons to be 1.
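
As a sketch of the difference (the layer sizes are hypothetical):

import torch.nn as nn

# With BCELoss(): the model itself must end in a sigmoid.
model_bce = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                          nn.Linear(64, 1), nn.Sigmoid())
criterion = nn.BCELoss()

# With BCEWithLogitsLoss(): no sigmoid in the model; the loss applies it internally.
model_logits = nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                             nn.Linear(64, 1))
criterion = nn.BCEWithLogitsLoss()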

For multi-class problems, the number of output neurons is greater than 1, and we use the CrossEntropyLoss function. Like BCEWithLogitsLoss(), CrossEntropyLoss expects raw logits: it applies a log-softmax internally, so the model should not end in a softmax layer (a softmax is only needed if you want explicit probabilities at inference time). Note that we can also perform binary classification this way, but in this case, we need two output neurons.
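
A sketch of the multi-class setup, assuming 5 classes and the same hypothetical sizes:

import torch.nn as nn

# 5 output neurons for 5 classes; no softmax at the end,
# since CrossEntropyLoss applies log-softmax internally.
model_mc = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 5))
criterion = nn.CrossEntropyLoss()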

Data Type

For binary problems, the targets must be of type torch.FloatTensor. For multi-class problems, the targets must be of type torch.LongTensor.

Also, for BCELoss/BCEWithLogitsLoss the target tensor must have the same shape as the outputs, so you might need reshape(-1, 1) to turn a 1D vector of labels into an (N, 1) column. For CrossEntropyLoss, the targets must stay a 1D tensor of class indices.
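
A sketch of the two target formats, assuming the raw labels arrive as a hypothetical NumPy integer array called targets_np:

import numpy as np
import torch

targets_np = np.array([0, 1, 1, 0])   # raw integer labels

# Binary (BCELoss / BCEWithLogitsLoss): float targets, shaped like the outputs.
targets_binary = torch.from_numpy(targets_np).float().reshape(-1, 1)  # shape (4, 1)

# Multi-class (CrossEntropyLoss): long targets, a 1D vector of class indices.
targets_multi = torch.from_numpy(targets_np).long()                   # shape (4,)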

Making Predictions

To turn the model outputs into class predictions, each loss function requires a different method.

import numpy as np

predictions = np.round(outputs_test.numpy())   # BCELoss: outputs are probabilities, threshold at 0.5

predictions = (outputs_test.numpy() > 0)       # BCEWithLogitsLoss: outputs are logits; logit > 0 means probability > 0.5

predictions = torch.argmax(outputs_test, 1)    # CrossEntropyLoss: index of the largest logit in each row
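
As a usage sketch, here is the full prediction path for the BCEWithLogitsLoss case, reusing the hypothetical model_logits from above:

import torch

model_logits.eval()
with torch.no_grad():
    logits = model_logits(torch.randn(8, 20))   # raw logits, shape (8, 1)
    probs = torch.sigmoid(logits)                # probabilities, if you need them
    predictions = (logits > 0).long()            # 0/1 class predictions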


Author | MMG
