OCDocker.OCScore.Analysis.Metrics.Ranking module¶
Core ranking metrics used across Analysis (ROC AUC, PR AUC, EF, BEDROC, etc.).
Usage:
from OCDocker.OCScore.Analysis.Metrics import Ranking as Rank
- OCDocker.OCScore.Analysis.Metrics.Ranking.bedroc(y_true, y_score, alpha=20.0)[source]¶
BEDROC (Boltzmann-Enhanced Discrimination of ROC) per Truchon & Bayly (2007), ranking by descending score.
- Parameters:
y_true (np.ndarray) – True binary labels (0/1 or boolean).
y_score (np.ndarray) – Target scores: probability estimates of the positive class, confidence values, or non-thresholded decision values (as returned by a classifier).
alpha (float, optional) – Exponential weighting factor; higher values weight early recognition more heavily. Default is 20.0.
- Returns:
BEDROC score (0.0 ~ 1.0, or NaN if no positives).
- Return type:
float
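For reference, the Truchon & Bayly definition can be sketched in plain Python. This is a minimal sketch, not the module's implementation (which may differ in validation and tie handling); `bedroc_sketch` is a hypothetical name:

```python
import math

def bedroc_sketch(y_true, y_score, alpha=20.0):
    """BEDROC per Truchon & Bayly (2007), ranking by descending score."""
    n_total = len(y_true)
    order = sorted(range(n_total), key=lambda i: -y_score[i])
    # 1-based ranks of the actives in the descending-score ordering
    ranks = [r + 1 for r, i in enumerate(order) if y_true[i]]
    n_act = len(ranks)
    if n_act == 0:
        return float("nan")  # undefined with no positives
    ra = n_act / n_total
    # RIE: observed sum of exponential rank weights over its random expectation
    observed = sum(math.exp(-alpha * r / n_total) for r in ranks)
    expected = ra * (1 - math.exp(-alpha)) / (math.exp(alpha / n_total) - 1)
    rie = observed / expected
    # Rescale RIE onto roughly [0, 1] so that a perfect ranking scores ~1
    return (rie * ra * math.sinh(alpha / 2)
            / (math.cosh(alpha / 2) - math.cosh(alpha / 2 - alpha * ra))
            + 1 / (1 - math.exp(alpha * (1 - ra))))
```

A perfect ranking yields a value near 1.0, a worst-case ranking a value near 0.0.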
- OCDocker.OCScore.Analysis.Metrics.Ranking.enrichment_factor(y_true, y_score, fraction)[source]¶
EF@fraction (e.g., 0.01 for 1%). EF = hits_in_top_fraction / expected_hits_random.
- Parameters:
y_true (np.ndarray) – True binary labels (0/1 or boolean).
y_score (np.ndarray) – Target scores: probability estimates of the positive class, confidence values, or non-thresholded decision values (as returned by a classifier).
fraction (float) – Fraction of top-scoring samples to consider (0.0 ~ 1.0).
- Returns:
Enrichment factor (>= 0.0, or NaN if no positives).
- Return type:
float
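The EF formula from the docstring can be sketched in a few lines of plain Python. The rounding convention for the cut-off size and the `enrichment_factor_sketch` name are assumptions, not taken from the module:

```python
def enrichment_factor_sketch(y_true, y_score, fraction):
    """EF@fraction: hits found in the top fraction over hits expected at random."""
    n = len(y_true)
    n_pos = sum(y_true)
    if n_pos == 0:
        return float("nan")  # undefined with no positives
    # Cut-off size; rounding (vs. ceiling) is an assumed convention
    n_top = max(1, int(round(n * fraction)))
    order = sorted(range(n), key=lambda i: -y_score[i])
    hits = sum(y_true[i] for i in order[:n_top])
    expected = n_pos * n_top / n  # hits expected under a random ranking
    return hits / expected
```

With 2 actives among 10 samples and both ranked in the top 20%, EF@0.2 is 5.0 (the maximum possible at that fraction and active rate).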
- OCDocker.OCScore.Analysis.Metrics.Ranking.groupwise(y_true, y_score, groups)[source]¶
Compute macro/micro ROC/PR AUC across discrete groups.
- Parameters:
y_true (np.ndarray) – True binary labels (0/1 or boolean).
y_score (np.ndarray) – Target scores: probability estimates of the positive class, confidence values, or non-thresholded decision values (as returned by a classifier).
groups (Iterable) – Group labels for each sample (same length as y_true/y_score).
- Returns:
Dictionary with keys “roc_auc_macro”, “pr_auc_macro”, “roc_auc_micro”, “pr_auc_micro” and corresponding float values (or NaN if undefined).
- Return type:
dict[str, float]
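The macro/micro distinction can be illustrated with a plain-Python sketch restricted to ROC AUC (the real function also reports PR AUC). Skipping single-class groups from the macro average is an assumption, and `groupwise_sketch` / `rank_auc` are hypothetical names:

```python
from collections import defaultdict
from math import isnan, nan

def rank_auc(y, s):
    # Mann-Whitney identity: AUC = P(positive outscores negative), ties count 0.5
    pos = [v for t, v in zip(y, s) if t]
    neg = [v for t, v in zip(y, s) if not t]
    if not pos or not neg:
        return nan  # undefined when only one class is present
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def groupwise_sketch(y_true, y_score, groups):
    by_group = defaultdict(lambda: ([], []))
    for y, s, g in zip(y_true, y_score, groups):
        by_group[g][0].append(y)
        by_group[g][1].append(s)
    per_group = [rank_auc(ys, ss) for ys, ss in by_group.values()]
    defined = [a for a in per_group if not isnan(a)]
    return {
        # macro: unweighted mean of per-group AUCs
        "roc_auc_macro": sum(defined) / len(defined) if defined else nan,
        # micro: one AUC over the pooled samples
        "roc_auc_micro": rank_auc(list(y_true), list(y_score)),
    }
```

Macro averaging weights every group equally regardless of size; micro pooling lets large groups dominate, so the two can diverge sharply on imbalanced benchmarks.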
- OCDocker.OCScore.Analysis.Metrics.Ranking.pr_auc(y_true, y_score)[source]¶
Compute average precision (area under PR curve).
- Parameters:
y_true (np.ndarray) – True binary labels (0/1 or boolean).
y_score (np.ndarray) – Target scores: probability estimates of the positive class, confidence values, or non-thresholded decision values (as returned by a classifier).
- Returns:
Average precision score (0.0 ~ 1.0).
- Return type:
float
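Average precision has a compact rank-based form: the mean of precision@k over the ranks k at which a positive appears. A minimal sketch (assuming no score ties; `average_precision_sketch` is a hypothetical name):

```python
def average_precision_sketch(y_true, y_score):
    """AP = mean of precision@k over the ranks k where a positive appears."""
    order = sorted(range(len(y_true)), key=lambda i: -y_score[i])
    n_pos = sum(y_true)
    if n_pos == 0:
        return float("nan")  # undefined with no positives
    hits, total = 0, 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i]:
            hits += 1
            total += hits / rank  # precision among the top `rank` samples
    return total / n_pos
```

For labels [1, 0, 1] ranked in that order, AP = (1/1 + 2/3) / 2 = 5/6; a perfect ranking gives 1.0.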
- OCDocker.OCScore.Analysis.Metrics.Ranking.riep(y_true, y_score, k)[source]¶
Relative enrichment among the top-k versus total positives.
- Parameters:
y_true (np.ndarray) – True binary labels (0/1 or boolean).
y_score (np.ndarray) – Target scores: probability estimates of the positive class, confidence values, or non-thresholded decision values (as returned by a classifier).
k (int) – Number of top-scoring samples to consider.
- Returns:
RIEP score (0.0 ~ 1.0, or NaN if no positives).
- Return type:
float
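Under one plausible reading of the docstring (an assumption, not confirmed by the source), the metric is the fraction of all positives recovered within the top k, i.e. recall@k, which matches the stated 0.0 ~ 1.0 range. A sketch under that assumption (`riep_sketch` is a hypothetical name):

```python
def riep_sketch(y_true, y_score, k):
    # Assumed reading: fraction of all positives found in the top-k (recall@k)
    n_pos = sum(y_true)
    if n_pos == 0:
        return float("nan")  # undefined with no positives
    order = sorted(range(len(y_true)), key=lambda i: -y_score[i])
    return sum(y_true[i] for i in order[:k]) / n_pos
```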
- OCDocker.OCScore.Analysis.Metrics.Ranking.roc_auc(y_true, y_score)[source]¶
Compute ROC AUC with defensive validation.
- Parameters:
y_true (np.ndarray) – True binary labels (0/1 or boolean).
y_score (np.ndarray) – Target scores: probability estimates of the positive class, confidence values, or non-thresholded decision values (as returned by a classifier).
- Returns:
ROC AUC score (0.0 ~ 1.0).
- Return type:
float
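ROC AUC equals the Mann-Whitney probability that a randomly drawn positive outscores a randomly drawn negative. A plain-Python sketch with basic validation (returning NaN on a single class is an assumption; the module may raise instead, and `roc_auc_sketch` is a hypothetical name):

```python
def roc_auc_sketch(y_true, y_score):
    """ROC AUC via the Mann-Whitney identity, with basic validation."""
    if len(y_true) != len(y_score):
        raise ValueError("y_true and y_score must have the same length")
    pos = [s for y, s in zip(y_true, y_score) if y]
    neg = [s for y, s in zip(y_true, y_score) if not y]
    if not pos or not neg:
        return float("nan")  # undefined when only one class is present
    # AUC = P(random positive outscores random negative); ties count 0.5
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```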
- OCDocker.OCScore.Analysis.Metrics.Ranking.threshold_at_precision(y_true, y_score, target_precision)[source]¶
Find the first threshold that achieves at least the given precision.
- Parameters:
y_true (np.ndarray) – True binary labels (0/1 or boolean).
y_score (np.ndarray) – Target scores: probability estimates of the positive class, confidence values, or non-thresholded decision values (as returned by a classifier).
target_precision (float) – Desired precision level (0.0 ~ 1.0).
- Returns:
(threshold, precision, recall) at first point where precision >= target_precision, or (NaN, NaN, NaN) if target_precision not achievable.
- Return type:
tuple[float, float, float]
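The mechanics can be sketched by enumerating precision/recall at every candidate threshold and scanning for the target. Reading "first" as the most permissive (lowest) threshold meeting the target is an assumption, as is the `threshold_at_precision_sketch` name:

```python
def threshold_at_precision_sketch(y_true, y_score, target_precision):
    n_pos = sum(y_true)
    if n_pos == 0:
        return (float("nan"),) * 3
    order = sorted(range(len(y_true)), key=lambda i: -y_score[i])
    points = []  # (threshold, precision, recall), strictest threshold first
    tp = 0
    for rank, i in enumerate(order, start=1):
        tp += 1 if y_true[i] else 0
        points.append((y_score[i], tp / rank, tp / n_pos))
    # Scan from the most permissive threshold upward (an assumed reading of "first")
    for thr, prec, rec in reversed(points):
        if prec >= target_precision:
            return (thr, prec, rec)
    return (float("nan"),) * 3  # target precision never reached
```

Scanning from the permissive end returns the threshold with the best recall among those that satisfy the precision constraint.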
- OCDocker.OCScore.Analysis.Metrics.Ranking.top_fraction_precision(y_true, y_score, frac)[source]¶
Precision among the top fraction (e.g., 0.01 for top-1%).
- Parameters:
y_true (np.ndarray) – True binary labels (0/1 or boolean).
y_score (np.ndarray) – Target scores: probability estimates of the positive class, confidence values, or non-thresholded decision values (as returned by a classifier).
frac (float) – Fraction of top-scoring samples to consider (0.0 ~ 1.0).
- Returns:
Precision among top fraction (0.0 ~ 1.0).
- Return type:
float
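A plain-Python sketch; the rounding convention for the cut-off size and the `top_fraction_precision_sketch` name are assumptions:

```python
def top_fraction_precision_sketch(y_true, y_score, frac):
    # Cut-off size; rounding (vs. ceiling) is an assumed convention
    n_top = max(1, int(round(len(y_true) * frac)))
    order = sorted(range(len(y_true)), key=lambda i: -y_score[i])
    return sum(y_true[i] for i in order[:n_top]) / n_top
```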
- OCDocker.OCScore.Analysis.Metrics.Ranking.top_k_precision(y_true, y_score, k)[source]¶
Precision among the top-k scored samples (descending by score).
- Parameters:
y_true (np.ndarray) – True binary labels (0/1 or boolean).
y_score (np.ndarray) – Target scores: probability estimates of the positive class, confidence values, or non-thresholded decision values (as returned by a classifier).
k (int) – Number of top-scoring samples to consider.
- Returns:
Precision among top-k (0.0 ~ 1.0).
- Return type:
float
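Precision@k is the simplest of these metrics; a two-line sketch (`top_k_precision_sketch` is a hypothetical name, and k is assumed to be a positive integer no larger than the sample count):

```python
def top_k_precision_sketch(y_true, y_score, k):
    # Fraction of positives among the k highest-scoring samples
    order = sorted(range(len(y_true)), key=lambda i: -y_score[i])
    return sum(y_true[i] for i in order[:k]) / k
```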