spateo.segmentation.benchmark
=============================

.. py:module:: spateo.segmentation.benchmark

.. autoapi-nested-parse::

   Functions to help segmentation benchmarking, specifically to compare two
   sets of segmentation labels.


Functions
---------

.. autoapisummary::

   spateo.segmentation.benchmark.adjusted_rand_score
   spateo.segmentation.benchmark.iou
   spateo.segmentation.benchmark.average_precision
   spateo.segmentation.benchmark.classification_stats
   spateo.segmentation.benchmark.labeling_stats
   spateo.segmentation.benchmark.compare


Module Contents
---------------

.. py:function:: adjusted_rand_score(y_true: numpy.ndarray, y_pred: numpy.ndarray) -> float

   Compute the Adjusted Rand Score (ARS).

   Re-implementation to deal with over/underflow that is common with large
   datasets.

   :param y_true: True labels
   :param y_pred: Predicted labels

   :returns: Adjusted Rand Score


.. py:function:: iou(labels1: numpy.ndarray, labels2: numpy.ndarray) -> scipy.sparse.csr_matrix

   Compute intersection-over-union (IOU).

   :param labels1: First set of labels
   :param labels2: Second set of labels

   :returns: Sparse matrix where the first axis corresponds to the first set
             of labels and the second axis to the second set.


.. py:function:: average_precision(iou: scipy.sparse.csr_matrix, tau: float = 0.5) -> float

   Compute average precision (AP).

   :param iou: IOU of true and predicted labels
   :param tau: IOU threshold to determine whether a prediction is correct

   :returns: Average precision


.. py:function:: classification_stats(y_true: numpy.ndarray, y_pred: numpy.ndarray) -> Tuple[float, float, float, float, float, float, float]

   Calculate pixel classification statistics by considering labeled pixels
   as occupied (1) and unlabeled pixels as unoccupied (0).

   :param y_true: True labels
   :param y_pred: Predicted labels

   :returns: * true negative rate
             * false positive rate
             * false negative rate
             * true positive rate (a.k.a. recall)
             * precision
             * accuracy
             * F1 score
   :rtype: A 7-element tuple containing the following values


.. py:function:: labeling_stats(y_true: numpy.ndarray, y_pred: numpy.ndarray) -> Tuple[float, float, float, float]

   Calculate labeling (cluster) statistics.

   :param y_true: True labels
   :param y_pred: Predicted labels

   :returns: * adjusted rand score
             * homogeneity
             * completeness
             * v score
   :rtype: A 4-element tuple containing the following values


.. py:function:: compare(adata: anndata.AnnData, true_layer: str, pred_layer: str, data_layer: str = SKM.X_LAYER, umi_pixels_only: bool = True, random_background: bool = True, ap_taus: Tuple[int, Ellipsis] = tuple(np.arange(0.5, 1, 0.05)), seed: Optional[int] = None) -> pandas.DataFrame

   Compute segmentation statistics.

   :param adata: Input AnnData
   :param true_layer: Layer containing true labels
   :param pred_layer: Layer containing predicted labels
   :param data_layer: Layer containing UMIs
   :param umi_pixels_only: Whether or not to only consider pixels that have
                           at least one UMI captured (as determined by
                           `data_layer`).
   :param random_background: Simulate random background by randomly permuting
                             the `pred_layer` labels and computing the same
                             statistics against `true_layer`. The returned
                             DataFrame will have an additional column for
                             these statistics.
   :param ap_taus: Tau thresholds to calculate average precision. Defaults to
                   0.05 increments starting at 0.5 and ending at (and
                   including) 0.95.
   :param seed: Random seed.

   :returns: Pandas DataFrame containing classification and labeling statistics
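To illustrate what `iou` and `average_precision` compute, the following is a minimal NumPy/SciPy sketch, not Spateo's actual implementation. `iou_matrix` builds a sparse contingency table of pairwise overlaps between two label arrays and divides by the union areas; `average_precision_sketch` greedily matches each label in the first set to its best-overlapping partner at threshold `tau` and reports TP / (TP + FP + FN). Both function names are hypothetical, and the real implementation may treat the background label 0 and the matching step differently.

```python
import numpy as np
from scipy import sparse


def iou_matrix(labels1: np.ndarray, labels2: np.ndarray) -> sparse.csr_matrix:
    """Sketch of a pairwise IOU matrix between two integer label arrays.

    Entry (i, j) = |pixels labeled i in labels1 AND j in labels2| / |union|.
    NOTE: label 0 is treated like any other label here; a real benchmark
    would typically exclude it as background.
    """
    l1 = np.asarray(labels1).ravel()
    l2 = np.asarray(labels2).ravel()
    n1, n2 = int(l1.max()) + 1, int(l2.max()) + 1
    # COO duplicates are summed on conversion, yielding intersection sizes.
    inter = sparse.coo_matrix(
        (np.ones(l1.size), (l1, l2)), shape=(n1, n2)
    ).tocsr()
    areas1 = np.bincount(l1, minlength=n1)
    areas2 = np.bincount(l2, minlength=n2)
    rows, cols = inter.nonzero()
    ivals = np.asarray(inter[rows, cols]).ravel()
    union = areas1[rows] + areas2[cols] - ivals
    return sparse.csr_matrix((ivals / union, (rows, cols)), shape=(n1, n2))


def average_precision_sketch(iou: sparse.csr_matrix, tau: float = 0.5) -> float:
    """AP at threshold tau: greedy one-to-one matching of rows to columns."""
    dense = np.asarray(iou.todense())
    n_true, n_pred = dense.shape
    used = np.zeros(n_pred, dtype=bool)
    tp = 0
    for i in range(n_true):
        j = int(dense[i].argmax())
        if dense[i, j] >= tau and not used[j]:
            used[j] = True
            tp += 1
    fp = int((~used).sum())  # predictions never matched
    fn = n_true - tp         # true labels never matched
    return tp / (tp + fp + fn)
```

Sweeping `tau` over a grid (as `compare` does with `ap_taus`) then traces how precision degrades as the overlap requirement tightens.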
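The homogeneity, completeness, and v score returned by `labeling_stats` follow the standard entropy-based definitions (Rosenberg & Hirschberg, 2007), which `sklearn.metrics.homogeneity_completeness_v_measure` also implements. The sketch below (a hypothetical `labeling_stats_sketch`, not Spateo's code) computes those three quantities from a contingency table; the adjusted rand score component is omitted here.

```python
import numpy as np


def _entropy(counts: np.ndarray) -> float:
    """Shannon entropy (natural log) of a vector of counts."""
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())


def labeling_stats_sketch(y_true, y_pred):
    """Homogeneity, completeness, and V-measure from a contingency table."""
    ti = np.unique(np.asarray(y_true).ravel(), return_inverse=True)[1].ravel()
    pi = np.unique(np.asarray(y_pred).ravel(), return_inverse=True)[1].ravel()
    n = ti.size
    # Contingency table: rows = true classes, columns = predicted clusters.
    cont = np.zeros((int(ti.max()) + 1, int(pi.max()) + 1))
    np.add.at(cont, (ti, pi), 1.0)
    h_c = _entropy(cont.sum(axis=1))  # H(C), entropy of true classes
    h_k = _entropy(cont.sum(axis=0))  # H(K), entropy of predicted clusters
    p = cont / n
    pc = p.sum(axis=1, keepdims=True)  # class marginals
    pk = p.sum(axis=0, keepdims=True)  # cluster marginals
    nz = p > 0
    # Conditional entropies H(C|K) and H(K|C) from the joint distribution.
    h_c_given_k = float(-(p[nz] * np.log((p / pk)[nz])).sum())
    h_k_given_c = float(-(p[nz] * np.log((p / pc)[nz])).sum())
    homogeneity = 1.0 if h_c == 0 else 1.0 - h_c_given_k / h_c
    completeness = 1.0 if h_k == 0 else 1.0 - h_k_given_c / h_k
    denom = homogeneity + completeness
    v = 0.0 if denom == 0 else 2 * homogeneity * completeness / denom
    return homogeneity, completeness, v
```

Homogeneity penalizes clusters that mix true classes, completeness penalizes classes split across clusters, and the v score is their harmonic mean; collapsing everything into one cluster is trivially complete but has zero homogeneity.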