OCDocker.OCScore.DNN.future.losses module

Loss functions for the future DNN pipeline (ranking, contrastive, weighting).

class OCDocker.OCScore.DNN.future.losses.UncertaintyWeighting(*args, **kwargs)[source]

Bases: Module

Uncertainty-based loss balancing for multi-task objectives.

Parameters:
  • task_names (Iterable[str]) – Names of tasks to balance. The order defines parameter indexing.

  • init_log_var (float, optional) – Initial value for log-variance parameters, by default 0.0.

Examples

>>> import torch
>>> from OCDocker.OCScore.DNN.future.losses import UncertaintyWeighting
>>> task_names = ["regression", "classification"]
>>> model = UncertaintyWeighting(task_names)
>>> losses = {
...     "regression": torch.tensor(2.0),
...     "classification": torch.tensor(1.0)
... }
>>> total_loss, weights = model(losses)
>>> total_loss.requires_grad
True
>>> sorted(weights)
['classification', 'regression']
__init__(task_names, init_log_var=0.0)[source]

Initialize uncertainty weighting module.

Parameters:
  • task_names (Iterable[str]) – Task names to balance.

  • init_log_var (float, optional) – Initial log-variance value, by default 0.0.

Return type:

None

forward(losses)[source]

Combine losses using learned uncertainty weights.

Parameters:

losses (Dict[str, torch.Tensor]) – Dictionary of task losses.

Returns:

Combined loss and current weights per task.

Return type:

tuple[torch.Tensor, Dict[str, float]]
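The weighting formula itself is not spelled out above. A minimal sketch, assuming the common homoscedastic-uncertainty formulation of Kendall et al. (total = Σᵢ exp(−sᵢ)·Lᵢ + sᵢ, with one learned log-variance sᵢ per task), might look like the following; the class name `UncertaintyWeightingSketch` is hypothetical and the module's actual implementation may differ:

```python
import torch
import torch.nn as nn


class UncertaintyWeightingSketch(nn.Module):
    """Sketch: total = sum_i exp(-s_i) * L_i + s_i, s_i = learned log-variance."""

    def __init__(self, task_names, init_log_var=0.0):
        super().__init__()
        self.task_names = list(task_names)
        # one log-variance parameter per task, in task_names order
        self.log_vars = nn.Parameter(
            torch.full((len(self.task_names),), float(init_log_var))
        )

    def forward(self, losses):
        total = torch.zeros((), dtype=self.log_vars.dtype)
        weights = {}
        for i, name in enumerate(self.task_names):
            w = torch.exp(-self.log_vars[i])  # precision weight 1 / sigma_i^2
            total = total + w * losses[name] + self.log_vars[i]
            weights[name] = float(w)
        return total, weights
```

Under this formulation, with both log-variances initialized to 0.0 each task starts with weight 1.0, and the regularizing `+ s_i` term prevents the model from driving all weights to zero.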

OCDocker.OCScore.DNN.future.losses.focal_binary_loss(logits, targets, alpha=0.25, gamma=2.0, reduction='mean')[source]

Binary focal loss with logits.

Parameters:
  • logits (torch.Tensor) – Raw model logits.

  • targets (torch.Tensor) – Binary targets (0/1).

  • alpha (float, optional) – Balancing factor, by default 0.25.

  • gamma (float, optional) – Focusing parameter, by default 2.0.

  • reduction (str, optional) – Reduction mode: ‘mean’, ‘sum’, or ‘none’. Default is ‘mean’.

Returns:

Focal loss value.

Return type:

torch.Tensor
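As a reference for the parameters above, a sketch of the standard binary focal loss formulation (Lin et al.) with logits follows; the function name `focal_binary_loss_sketch` is hypothetical and the module's implementation may differ in detail:

```python
import torch
import torch.nn.functional as F


def focal_binary_loss_sketch(logits, targets, alpha=0.25, gamma=2.0, reduction="mean"):
    # per-element BCE, computed from logits for numerical stability
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)            # prob. of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)  # class-balancing factor
    loss = alpha_t * (1 - p_t) ** gamma * bce              # down-weight easy examples
    if reduction == "mean":
        return loss.mean()
    if reduction == "sum":
        return loss.sum()
    return loss
```

With `gamma=0` and `alpha=0.5` this reduces to half the ordinary BCE; larger `gamma` focuses the loss on hard (low `p_t`) examples.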

OCDocker.OCScore.DNN.future.losses.lambda_rank_ndcg_loss(scores, labels, k_fractions=(0.01, 0.05, 0.1, 0.25, 0.5, 0.75), weights=(1/6, 1/6, 1/6, 1/6, 1/6, 1/6))[source]

LambdaRank-style loss with NDCG@k weighting (top-heavy).

Parameters:
  • scores (torch.Tensor) – Predicted scores (N,).

  • labels (torch.Tensor) – Binary labels (N,).

  • k_fractions (Sequence[float], optional) – Fractions of the ranked list to emphasize, by default (0.01, 0.05, 0.1, 0.25, 0.5, 0.75).

  • weights (Sequence[float], optional) – Weight assigned to each entry of k_fractions; must have the same length. Uniform (each 1/6) by default.

Returns:

LambdaRank NDCG loss.

Return type:

torch.Tensor
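The core LambdaRank idea can be sketched as a pairwise RankNet logistic loss in which each pair is weighted by the |ΔNDCG| incurred by swapping its two items in the current ranking. The sketch below is an illustration under that assumption, with a hypothetical name; it omits the NDCG@k truncation over `k_fractions` documented above, which the module's implementation applies on top:

```python
import torch


def lambda_rank_loss_sketch(scores, labels):
    """Simplified LambdaRank: pairwise logistic loss weighted by |delta NDCG|."""
    n = scores.numel()
    # 0-based rank of each item under the current scores (rank 0 = top)
    order = torch.argsort(scores, descending=True)
    rank = torch.empty(n, dtype=torch.long)
    rank[order] = torch.arange(n)
    disc = 1.0 / torch.log2(rank.float() + 2.0)          # DCG position discount
    gain = 2.0 ** labels.float() - 1.0                   # graded relevance gain
    # ideal DCG for normalization
    ideal_disc = 1.0 / torch.log2(torch.arange(n).float() + 2.0)
    idcg = (torch.sort(gain, descending=True).values * ideal_disc).sum().clamp(min=1e-12)
    # swapping items i and j changes DCG by (g_i - g_j) * (d_i - d_j)
    diff_gain = gain.unsqueeze(1) - gain.unsqueeze(0)
    diff_disc = disc.unsqueeze(1) - disc.unsqueeze(0)
    delta_ndcg = (diff_gain * diff_disc).abs() / idcg
    s_diff = scores.unsqueeze(1) - scores.unsqueeze(0)   # s_i - s_j
    mask = diff_gain > 0                                 # pairs where i outranks j in relevance
    # RankNet logistic loss per pair, weighted by the NDCG swap cost
    pair_loss = torch.log1p(torch.exp(-s_diff)) * delta_ndcg
    return pair_loss[mask].sum() / mask.sum().clamp(min=1)
```

Rankings that place relevant items first yield positive score differences on the masked pairs, shrinking the logistic term; misordered pairs near the top of the list pay the largest |ΔNDCG| penalty, which is what makes the loss top-heavy.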

OCDocker.OCScore.DNN.future.losses.supervised_contrastive_loss(embeddings, labels, temperature=0.1)[source]

Supervised contrastive loss (SupCon) with L2-normalized embeddings.

Parameters:
  • embeddings (torch.Tensor) – Embeddings of shape (N, D).

  • labels (torch.Tensor) – Binary or multiclass labels of shape (N,).

  • temperature (float, optional) – Softmax temperature, by default 0.1.

Returns:

SupCon loss value.

Return type:

torch.Tensor