Losses

Loss functions are used to measure how well a model's predictions match the actual outcomes, providing a value that the model aims to minimize during the training process.

Loss functions are exposed as static methods on the class NumPower\NeuralNetwork\Losses.

Regression


Regression losses are used to evaluate the performance of regression models, which predict continuous values. These losses quantify the difference between the predicted values generated by the model and the actual values from the data. The goal during training is to minimize these losses, thereby improving the model's accuracy.

MeanSquaredError

public static function MeanSquaredError(
    int|float|array|\NDArray|Tensor $x,
    int|float|array|\NDArray|Tensor $y,
    ?string $reduction = 'mean',
    string $name = ''
): Tensor

Calculates the Mean Squared Error (MSE) between two inputs, $x and $y. This function is a key metric for evaluating the performance of regression models. The MSE is computed by averaging the squared differences between the predicted and actual values.

An optional $reduction parameter can be specified to control how the final result is aggregated. By default, it is set to mean, which returns the average of all squared errors. Another common option is sum, which returns the total sum of all squared errors.
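A minimal usage sketch based on the signature above, assuming the class is autoloaded and that plain PHP arrays are accepted for both inputs (as the parameter types indicate). The values in the comments follow directly from the MSE formula, not from running the library:

```php
<?php

use NumPower\NeuralNetwork\Losses;

$predictions = [2.5, 0.0, 2.0, 8.0];
$targets     = [3.0, -0.5, 2.0, 7.0];

// Squared errors per sample: [0.25, 0.25, 0.0, 1.0]
// Default 'mean' reduction averages them: (0.25 + 0.25 + 0.0 + 1.0) / 4 = 0.375
$mse = Losses::MeanSquaredError($predictions, $targets);

// 'sum' reduction returns the total instead: 1.5
$mseSum = Losses::MeanSquaredError($predictions, $targets, 'sum');
```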

MeanAbsoluteError

public static function MeanAbsoluteError(
    int|float|array|\NDArray|Tensor $x,
    int|float|array|\NDArray|Tensor $y,
    ?string $reduction = 'mean',
    string $name = ''
): Tensor

Calculates the Mean Absolute Error (MAE) between two inputs, $x and $y. This function is essential for assessing the accuracy of regression models by measuring the average magnitude of the errors in a set of predictions, without considering their direction. The MAE is computed by averaging the absolute differences between the predicted and actual values.

The $reduction parameter allows customization of how the final result is summarized. By default, it is set to mean, which returns the average of all absolute errors, but sum can also be specified to obtain the total sum of all absolute errors.
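A usage sketch under the same assumptions as above (autoloaded class, plain-array inputs accepted per the signature); the commented values are hand-computed from the MAE formula:

```php
<?php

use NumPower\NeuralNetwork\Losses;

$predictions = [2.5, 0.0, 2.0, 8.0];
$targets     = [3.0, -0.5, 2.0, 7.0];

// Absolute errors per sample: [0.5, 0.5, 0.0, 1.0]
// Default 'mean' reduction: (0.5 + 0.5 + 0.0 + 1.0) / 4 = 0.5
$mae = Losses::MeanAbsoluteError($predictions, $targets);

// 'sum' reduction: 2.0
$maeSum = Losses::MeanAbsoluteError($predictions, $targets, 'sum');
```

Note that with the same inputs as the MSE example, MAE penalizes the outlier error of 1.0 linearly rather than quadratically, which is why it is often preferred when the data contains outliers.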

Probabilistic


Probabilistic losses are used in models that predict probability distributions over outcomes rather than single point estimates. These losses measure how well the predicted probability distribution aligns with the actual distribution of the data. The goal is to minimize these losses to improve the model's ability to predict accurate probabilities.

BinaryCrossEntropy

public static function BinaryCrossEntropy(
    int|float|array|\NDArray|Tensor $x,
    int|float|array|\NDArray|Tensor $y,
    ?string $reduction = 'mean',
    string $name = ''
): Tensor

Computes the Binary Cross Entropy loss between the inputs $x (predictions) and $y (targets). This function is commonly used in binary classification tasks to measure the difference between probability distributions of predicted and actual class labels.

The $reduction parameter specifies how the final loss should be aggregated. The default is mean, which returns the average loss across all samples. Alternatively, sum can be specified to return the total sum of losses.
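A usage sketch, again assuming plain arrays are accepted for both inputs as the signature indicates. Binary cross entropy for a single sample is -(y·log(x) + (1 - y)·log(1 - x)), so the first sample below contributes -log(0.9) ≈ 0.105:

```php
<?php

use NumPower\NeuralNetwork\Losses;

// Predicted probabilities (e.g. sigmoid outputs) and binary class labels
$predictions = [0.9, 0.1, 0.8, 0.3];
$targets     = [1.0, 0.0, 1.0, 0.0];

// Default 'mean' reduction: average of the per-sample losses
$bce = Losses::BinaryCrossEntropy($predictions, $targets);

// 'sum' reduction: total loss over all samples
$bceSum = Losses::BinaryCrossEntropy($predictions, $targets, 'sum');
```

Since the formula takes logarithms of $x and 1 - $x, predictions should be strict probabilities in (0, 1); values of exactly 0 or 1 would make the logarithm diverge.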