spateo.alignment.methods.deprecated_utils ========================================= .. py:module:: spateo.alignment.methods.deprecated_utils Attributes ---------- .. autoapisummary:: spateo.alignment.methods.deprecated_utils.intersect_lsts spateo.alignment.methods.deprecated_utils.to_dense_matrix spateo.alignment.methods.deprecated_utils.extract_data_matrix spateo.alignment.methods.deprecated_utils.nx_torch spateo.alignment.methods.deprecated_utils._cat spateo.alignment.methods.deprecated_utils._unique spateo.alignment.methods.deprecated_utils._var spateo.alignment.methods.deprecated_utils._data spateo.alignment.methods.deprecated_utils._unsqueeze spateo.alignment.methods.deprecated_utils._mul spateo.alignment.methods.deprecated_utils._power spateo.alignment.methods.deprecated_utils._psi spateo.alignment.methods.deprecated_utils._pinv spateo.alignment.methods.deprecated_utils._dot spateo.alignment.methods.deprecated_utils._identity spateo.alignment.methods.deprecated_utils._linalg spateo.alignment.methods.deprecated_utils._prod spateo.alignment.methods.deprecated_utils._pi spateo.alignment.methods.deprecated_utils._chunk spateo.alignment.methods.deprecated_utils._randperm spateo.alignment.methods.deprecated_utils._roll spateo.alignment.methods.deprecated_utils._choice spateo.alignment.methods.deprecated_utils._topk spateo.alignment.methods.deprecated_utils._dstack spateo.alignment.methods.deprecated_utils._vstack spateo.alignment.methods.deprecated_utils._hstack spateo.alignment.methods.deprecated_utils._split Functions --------- .. autoapisummary:: spateo.alignment.methods.deprecated_utils.check_backend spateo.alignment.methods.deprecated_utils.check_spatial_coords spateo.alignment.methods.deprecated_utils.check_exp spateo.alignment.methods.deprecated_utils.check_obs spateo.alignment.methods.deprecated_utils.check_rep_layer spateo.alignment.methods.deprecated_utils.check_label_transfer_dict spateo.alignment.methods.deprecated_utils.check_label_transfer spateo.alignment.methods.deprecated_utils.generate_label_transfer_dict spateo.alignment.methods.deprecated_utils.get_rep spateo.alignment.methods.deprecated_utils.filter_common_genes spateo.alignment.methods.deprecated_utils.normalize_coords spateo.alignment.methods.deprecated_utils.normalize_exps spateo.alignment.methods.deprecated_utils.align_preprocess spateo.alignment.methods.deprecated_utils.guidance_pair_preprocess spateo.alignment.methods.deprecated_utils._kl_distance_backend spateo.alignment.methods.deprecated_utils._cosine_distance_backend spateo.alignment.methods.deprecated_utils._euc_distance_backend spateo.alignment.methods.deprecated_utils._label_distance_backend spateo.alignment.methods.deprecated_utils._correlation_distance_backend spateo.alignment.methods.deprecated_utils._jaccard_distance_backend spateo.alignment.methods.deprecated_utils._chebyshev_distance_backend spateo.alignment.methods.deprecated_utils._canberra_distance_backend spateo.alignment.methods.deprecated_utils._braycurtis_distance_backend spateo.alignment.methods.deprecated_utils._hamming_distance_backend spateo.alignment.methods.deprecated_utils._minkowski_distance_backend spateo.alignment.methods.deprecated_utils.calc_distance spateo.alignment.methods.deprecated_utils.calc_probability spateo.alignment.methods.deprecated_utils.get_P_core spateo.alignment.methods.deprecated_utils.get_P spateo.alignment.methods.deprecated_utils.get_P_sparse spateo.alignment.methods.deprecated_utils.update_Sp spateo.alignment.methods.deprecated_utils.update_gamma 
spateo.alignment.methods.deprecated_utils.update_alpha spateo.alignment.methods.deprecated_utils.update_nonrigid spateo.alignment.methods.deprecated_utils.update_rigid spateo.alignment.methods.deprecated_utils.update_sigma2 spateo.alignment.methods.deprecated_utils.update_assignment_P spateo.alignment.methods.deprecated_utils.con_K spateo.alignment.methods.deprecated_utils.con_K_geodist spateo.alignment.methods.deprecated_utils.get_kernel spateo.alignment.methods.deprecated_utils.kl_divergence_backend spateo.alignment.methods.deprecated_utils.kl_distance spateo.alignment.methods.deprecated_utils.calc_exp_dissimilarity spateo.alignment.methods.deprecated_utils.cal_dist spateo.alignment.methods.deprecated_utils.cal_dot spateo.alignment.methods.deprecated_utils.get_optimal_R spateo.alignment.methods.deprecated_utils._cal_cosine_similarity spateo.alignment.methods.deprecated_utils._cos_similarity spateo.alignment.methods.deprecated_utils._dist spateo.alignment.methods.deprecated_utils.coarse_rigid_alignment spateo.alignment.methods.deprecated_utils.coarse_rigid_alignment spateo.alignment.methods.deprecated_utils.inlier_from_NN spateo.alignment.methods.deprecated_utils.coarse_rigid_alignment_debug spateo.alignment.methods.deprecated_utils.inlier_from_NN_debug spateo.alignment.methods.deprecated_utils.voxel_data spateo.alignment.methods.deprecated_utils._init_guess_sigma2 spateo.alignment.methods.deprecated_utils._init_probability_parameters spateo.alignment.methods.deprecated_utils._get_anneling_factor spateo.alignment.methods.deprecated_utils._init_guess_beta2 spateo.alignment.methods.deprecated_utils._dense_to_sparse spateo.alignment.methods.deprecated_utils.empty_cache spateo.alignment.methods.deprecated_utils.torch_like_split
Module Contents
---------------
.. py:data:: intersect_lsts
.. py:data:: to_dense_matrix
.. py:data:: extract_data_matrix
.. py:function:: check_backend(device: str = 'cpu', dtype: str = 'float32', verbose: bool = True) Check the proper backend for the device. :param device: The device used to run the program. A specific GPU can also be selected, e.g. '0'. :param dtype: The floating-point number type. Only float32 and float64 are supported. :param verbose: If ``True``, print progress updates. :returns: The proper backend, together with `type_as`: `type_as.device` is the device used to run the program and `type_as.dtype` is the floating-point number type. :rtype: backend
.. py:function:: check_spatial_coords(sample: anndata.AnnData, spatial_key: str = 'spatial') -> numpy.ndarray Check and return the spatial coordinate information from an AnnData object. :param sample: An AnnData object containing the sample data. :type sample: AnnData :param spatial_key: The key in `.obsm` that corresponds to the raw spatial coordinates. Defaults to "spatial". :type spatial_key: str, optional :returns: The spatial coordinates. :rtype: np.ndarray :raises KeyError: If the specified spatial_key is not found in `sample.obsm`.
.. py:function:: check_exp(sample: anndata.AnnData, layer: str = 'X') -> numpy.ndarray Check expression matrix. :param sample: An AnnData object containing the sample data. :type sample: AnnData :param layer: The key in `.layers` that corresponds to the expression matrix. Defaults to "X". :type layer: str, optional :returns: The expression matrix. :raises KeyError: If the specified layer is not found in `sample.layers`.
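A minimal usage sketch for the three checks above. The toy ``AnnData`` object is invented for illustration, and the ``(backend, type_as)`` return order of ``check_backend`` is inferred from its docstring rather than verified:

.. code-block:: python

    import numpy as np
    import anndata as ad
    from spateo.alignment.methods.deprecated_utils import (
        check_backend,
        check_exp,
        check_spatial_coords,
    )

    # Toy sample: 50 spots, 20 genes, 2D spatial coordinates.
    sample = ad.AnnData(X=np.random.rand(50, 20).astype(np.float32))
    sample.obsm["spatial"] = np.random.rand(50, 2).astype(np.float32)

    # Assumed return order: the backend module first, then the type_as reference.
    nx, type_as = check_backend(device="cpu", dtype="float32", verbose=False)
    coords = check_spatial_coords(sample, spatial_key="spatial")  # (50, 2) array
    exp = check_exp(sample, layer="X")                            # (50, 20) array

..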
py:function:: check_obs(rep_layer: List[str], rep_field: List[str]) -> Optional[str] Check that the number of occurrences of 'obs' in the list of representation fields is no more than one. :param rep_layer: A list of representations to check. :type rep_layer: List[str] :param rep_field: A list of representation types corresponding to the representations in `rep_layer`. :type rep_field: List[str] :returns: The representation key if 'obs' occurs exactly once, otherwise None. :rtype: Optional[str] :raises ValueError: If 'obs' occurs more than once in the list. .. py:function:: check_rep_layer(samples: List[anndata.AnnData], rep_layer: Union[str, List[str]] = 'X', rep_field: Union[str, List[str]] = 'layer') -> bool Check if specified representations exist in the `.layers`, `.obsm`, or `.obs` attributes of AnnData objects. :param samples: A list of AnnData objects containing the data samples. :type samples: List[AnnData] :param rep_layer: The representation layer(s) to check. Defaults to "X". :type rep_layer: Union[str, List[str]], optional :param rep_field: The field(s) indicating the type of representation. Acceptable values are "layer", "obsm", and "obs". Defaults to "layer". :type rep_field: Union[str, List[str]], optional :returns: True if all specified representations exist in the corresponding attributes of all AnnData objects, False otherwise. :rtype: bool :raises ValueError: If the specified representation is not found in the specified attribute or if the attribute type is invalid. .. py:function:: check_label_transfer_dict(catA: List[str], catB: List[str], label_transfer_dict: Dict[str, Dict[str, float]]) Check the label transfer dictionary for consistency with given categories. :param catA: List of category labels from the first dataset. :type catA: List[str] :param catB: List of category labels from the second dataset. :type catB: List[str] :param label_transfer_dict: Dictionary defining the transfer probabilities between categories. :type label_transfer_dict: Dict[str, Dict[str, float]] :raises KeyError: If a category from `catA` is not found in `label_transfer_dict`. :raises KeyError: If a category from `catB` is not found in the nested dictionary of `label_transfer_dict`. .. py:function:: check_label_transfer(nx: Union[ot.backend.TorchBackend, ot.backend.NumpyBackend], type_as: Union[torch.Tensor, numpy.ndarray], samples: List[anndata.AnnData], obs_key: str, label_transfer_dict: Optional[List[Dict[str, Dict[str, float]]]] = None) -> List[Union[numpy.ndarray, torch.Tensor]] Check and generate label transfer matrices for the given samples. :param nx: Backend module (e.g., numpy or torch). :type nx: module :param type_as: Type to which the output should be cast. :type type_as: type :param samples: List of AnnData objects containing the samples. :type samples: List[AnnData] :param obs_key: The key in `.obs` that corresponds to the labels. :type obs_key: str :param label_transfer_dict: List of dictionaries defining the label transfer cost between categories of each pair of samples. Defaults to None. :type label_transfer_dict: Optional[List[Dict[str, Dict[str, float]]]], optional :returns: List of label transfer matrices, each as either a NumPy array or torch Tensor. :rtype: List[Union[np.ndarray, torch.Tensor]] :raises ValueError: If the length of `label_transfer_dict` does not match `len(samples) - 1`. .. 
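For reference, the shape of a ``label_transfer_dict`` as validated by ``check_label_transfer_dict`` above: outer keys are categories of the first sample (``catA``), inner keys are categories of the second sample (``catB``), and values are transfer probabilities/costs. The category names and numbers below are purely illustrative:

.. code-block:: python

    # Illustrative Dict[str, Dict[str, float]]; categories and values are made up.
    label_transfer_dict = {
        "cortex": {"cortex": 10.0, "striatum": 1.0},
        "striatum": {"cortex": 1.0, "striatum": 10.0},
    }

..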
py:function:: generate_label_transfer_dict(cat1: List[str], cat2: List[str], positive_pairs: Optional[List[Dict[str, Union[List[str], float]]]] = None, negative_pairs: Optional[List[Dict[str, Union[List[str], float]]]] = None, default_positve_value: float = 10.0) -> Dict[str, Dict[str, float]] Generate a label transfer dictionary with normalized values. :param cat1: List of categories from the first dataset. :type cat1: List[str] :param cat2: List of categories from the second dataset. :type cat2: List[str] :param positive_pairs: List of positive pairs with transfer values. Each dictionary should have 'left', 'right', and 'value' keys. Defaults to None. :type positive_pairs: Optional[List[Dict[str, Union[List[str], float]]]], optional :param negative_pairs: List of negative pairs with transfer values. Each dictionary should have 'left', 'right', and 'value' keys. Defaults to None. :type negative_pairs: Optional[List[Dict[str, Union[List[str], float]]]], optional :param default_positive_value: Default value for positive pairs if none are provided. Defaults to 10.0. :type default_positive_value: float, optional :returns: A normalized label transfer dictionary. :rtype: Dict[str, Dict[str, float]] .. py:function:: get_rep(nx: Union[ot.backend.TorchBackend, ot.backend.NumpyBackend], type_as: Union[torch.Tensor, numpy.ndarray], sample: anndata.AnnData, rep: str = 'X', rep_field: str = 'layer', genes: Optional[Union[list, numpy.ndarray]] = None) -> numpy.ndarray Get the specified representation from the AnnData object. :param nx: Backend module (e.g., numpy or torch). :type nx: module :param type_as: Type to which the output should be cast. :type type_as: type :param sample: The AnnData object containing the sample data. :type sample: AnnData :param rep: The name of the representation to retrieve. Defaults to "X". :type rep: str, optional :param rep_field: The type of representation. Acceptable values are "layer", "obs" and "obsm". Defaults to "layer". :type rep_field: str, optional :param genes: List of genes to filter if `rep_field` is "layer". Defaults to None. :type genes: Optional[Union[list, np.ndarray]], optional :returns: The requested representation from the AnnData object, cast to the specified type. :rtype: Union[np.ndarray, torch.Tensor] :raises ValueError: If `rep_field` is not one of the expected values. :raises KeyError: If the specified representation is not found in the AnnData object. .. py:function:: filter_common_genes(*genes, verbose: bool = True) -> list Filters for the intersection of genes between all samples. :param genes: List of genes. :param verbose: If ``True``, print progress updates. .. py:function:: normalize_coords(nx: Union[ot.backend.TorchBackend, ot.backend.NumpyBackend], coords: List[Union[numpy.ndarray, torch.Tensor]], verbose: bool = True, separate_scale: bool = True, separate_mean: bool = True) -> Tuple[List[Union[numpy.ndarray, torch.Tensor]], List[Union[numpy.ndarray, torch.Tensor]], List[Union[numpy.ndarray, torch.Tensor]]] Normalize the spatial coordinate. :param coords: Spatial coordinates of the samples. Each element in the list can be a numpy array or a torch tensor. :type coords: List[Union[np.ndarray, torch.Tensor]] :param nx: The backend to use for computations. Default is `ot.backend.NumpyBackend`. :type nx: Union[ot.backend.TorchBackend, ot.backend.NumpyBackend], optional :param verbose: If `True`, print progress updates. Default is `True`. 
:type verbose: bool, optional :param separate_scale: If `True`, normalize each coordinate axis independently. When doing the global refinement, this will be set to False. Default is `True`. :type separate_scale: bool, optional :param separate_mean: If `True`, normalize each coordinate axis to have zero mean independently. When doing the global refinement, this will be set to False. Default is `True`. :type separate_mean: bool, optional :returns: A tuple containing: - coords: List of normalized spatial coordinates. - normalize_scales: List of normalization scale factors applied to each coordinate axis. - normalize_means: List of mean values used for normalization of each coordinate axis. :rtype: Tuple[List[Union[np.ndarray, torch.Tensor]], List[Union[np.ndarray, torch.Tensor]], List[Union[np.ndarray, torch.Tensor]]]
.. py:function:: normalize_exps(nx: Union[ot.backend.TorchBackend, ot.backend.NumpyBackend], exp_layers: List[List[Union[numpy.ndarray, torch.Tensor]]], rep_field: Union[str, List[str]] = 'layer', verbose: bool = True) -> List[List[Union[numpy.ndarray, torch.Tensor]]] Normalize the gene expression matrices. :param nx: The backend to use for computations. Defaults to `ot.backend.NumpyBackend`. :type nx: Union[ot.backend.TorchBackend, ot.backend.NumpyBackend], optional :param exp_layers: Gene expression and optionally the representation matrices of the samples. Each element in the list can be a numpy array or a torch tensor. :type exp_layers: List[List[Union[np.ndarray, torch.Tensor]]] :param rep_field: Field(s) indicating the type of representation. If 'layer', normalization can be applied. Defaults to "layer". :type rep_field: Union[str, List[str]], optional :param verbose: If `True`, print progress updates. Default is `True`. :type verbose: bool, optional :returns: A list of lists containing normalized gene expression matrices. Each matrix in the list is a numpy array or a torch tensor. :rtype: List[List[Union[np.ndarray, torch.Tensor]]]
.. py:function:: align_preprocess(samples: List[anndata.AnnData], rep_layer: Union[str, List[str]] = 'X', rep_field: Union[str, List[str]] = 'layer', genes: Optional[Union[list, numpy.ndarray]] = None, spatial_key: str = 'spatial', label_transfer_dict: Optional[Union[dict, List[dict]]] = None, normalize_c: bool = False, normalize_g: bool = False, dtype: str = 'float64', device: str = 'cpu', verbose: bool = True) -> Tuple[Union[ot.backend.TorchBackend, ot.backend.NumpyBackend], Union[torch.Tensor, numpy.ndarray], List[List[Union[numpy.ndarray, torch.Tensor]]], List[Union[numpy.ndarray, torch.Tensor]], Union[torch.Tensor, numpy.ndarray], Union[torch.Tensor, numpy.ndarray], Union[torch.Tensor, numpy.ndarray]] Preprocess the data before alignment. :param samples: A list of AnnData objects containing the data samples. :type samples: List[AnnData] :param rep_layer: The representation(s) used to calculate the dissimilarity between spots. If "X", `sample.X` is used; a layer name refers to `sample.layers`; entries in `.obsm` or `.obs` are used according to `rep_field`. Default is "X". :type rep_layer: Union[str, List[str]], optional :param rep_field: The field(s) indicating where each representation is stored. Acceptable values are "layer", "obsm", and "obs". Default is "layer". :type rep_field: Union[str, List[str]], optional :param genes: Genes used for calculation. If None, use all common genes for calculation. Default is None. :type genes: Optional[Union[list, np.ndarray]], optional :param spatial_key: The key in `.obsm` that corresponds to the raw spatial coordinates. Default is "spatial". :type spatial_key: str, optional :param label_transfer_dict: Dictionary (or list of dictionaries) defining the label transfer cost between categories of each pair of samples. Default is None. :type label_transfer_dict: Optional[Union[dict, List[dict]]], optional :param normalize_c: Whether to normalize spatial coordinates. Default is False. :type normalize_c: bool, optional :param normalize_g: Whether to normalize gene expression. Default is False. :type normalize_g: bool, optional :param dtype: The floating-point number type. Only float32 and float64 are allowed. Default is "float64". :type dtype: str, optional :param device: The device used to run the program. Can specify the GPU to use, e.g., '0'. Default is "cpu". :type device: str, optional :param verbose: If True, print progress updates. Default is True. :type verbose: bool, optional :returns: A tuple containing the following elements: - backend: The backend used for computations (TorchBackend or NumpyBackend). - type_as: The type used for computations, which carries the dtype and device. - exp_layers: A list of processed expression layers. - spatial_coords: A list of spatial coordinates. - normalize_scales: Optional scaling factors for normalization. - normalize_means: Optional mean values for normalization. :rtype: Tuple :raises ValueError: If the specified representation is not found in the attributes of the AnnData objects. :raises AssertionError: If the spatial coordinate dimensions are different.
.. py:function:: guidance_pair_preprocess(nx: Union[ot.backend.TorchBackend, ot.backend.NumpyBackend], type_as: Union[torch.Tensor, numpy.ndarray], guidance_pair: List[numpy.ndarray], normalize_scales: Union[torch.Tensor, numpy.ndarray], normalize_means: Union[torch.Tensor, numpy.ndarray]) -> List[Union[torch.Tensor, numpy.ndarray]] Preprocess guidance pairs by normalizing them. :param nx: Backend module for computations (e.g., numpy or torch). Defaults to `ot.backend.NumpyBackend`. :type nx: Union[ot.backend.TorchBackend, ot.backend.NumpyBackend], optional :param type_as: Type to which the output should be cast. :type type_as: Union[torch.Tensor, np.ndarray] :param guidance_pair: List containing the guidance pairs as numpy arrays. :type guidance_pair: List[np.ndarray] :param normalize_scales: Tensor or array of normalization scales. :type normalize_scales: Union[torch.Tensor, np.ndarray] :param normalize_means: Tensor or array of normalization means. :type normalize_means: Union[torch.Tensor, np.ndarray] :returns: List containing the normalized guidance pairs. :rtype: List[Union[torch.Tensor, np.ndarray]]
.. py:function:: _kl_distance_backend(X: Union[numpy.ndarray, torch.Tensor], Y: Union[numpy.ndarray, torch.Tensor], probabilistic: bool = True, eps: float = 1e-08) -> Union[numpy.ndarray, torch.Tensor] Compute the pairwise KL divergence between all pairs of samples in matrices X and Y. :param X: Matrix with shape (N, D), where each row represents a sample. :type X: np.ndarray or torch.Tensor :param Y: Matrix with shape (M, D), where each row represents a sample. :type Y: np.ndarray or torch.Tensor :param probabilistic: If True, normalize the rows of X and Y to sum to 1 (to interpret them as probabilities). Default is True. :type probabilistic: bool, optional :param eps: A small value to avoid division by zero. Default is 1e-8. :type eps: float, optional :returns: Pairwise KL divergence matrix with shape (N, M). :rtype: np.ndarray :raises AssertionError: If the number of features in X and Y do not match.
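A NumPy sketch of the pairwise KL computation that ``_kl_distance_backend`` describes, shown only to make the semantics concrete; the library's implementation is backend-agnostic, and the exact placement of ``eps`` here is an assumption:

.. code-block:: python

    import numpy as np

    def pairwise_kl(X: np.ndarray, Y: np.ndarray, probabilistic: bool = True, eps: float = 1e-8) -> np.ndarray:
        """Pairwise KL divergence D(X_n || Y_m), returned as an (N, M) matrix."""
        assert X.shape[1] == Y.shape[1], "X and Y must have the same number of features"
        X = X + eps
        Y = Y + eps
        if probabilistic:
            # Normalize rows so each sample can be read as a probability vector.
            X = X / X.sum(axis=1, keepdims=True)
            Y = Y / Y.sum(axis=1, keepdims=True)
        # KL(x || y) = sum_d x_d log x_d - sum_d x_d log y_d, broadcast over all (n, m) pairs.
        return (X * np.log(X)).sum(axis=1, keepdims=True) - X @ np.log(Y).T

..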
py:function:: _cosine_distance_backend(X: Union[numpy.ndarray, torch.Tensor], Y: Union[numpy.ndarray, torch.Tensor], eps: float = 1e-08) -> Union[numpy.ndarray, torch.Tensor] Compute the pairwise cosine similarity between all pairs of samples in matrices X and Y. :param X: Matrix with shape (N, D), where each row represents a sample. :type X: np.ndarray or torch.Tensor :param Y: Matrix with shape (M, D), where each row represents a sample. :type Y: np.ndarray or torch.Tensor :param eps: A small value to avoid division by zero. Default is 1e-8. :type eps: float, optional :returns: Pairwise cosine similarity matrix with shape (N, M). :rtype: np.ndarray or torch.Tensor :raises AssertionError: If the number of features in X and Y do not match. .. py:function:: _euc_distance_backend(X: Union[numpy.ndarray, torch.Tensor], Y: Union[numpy.ndarray, torch.Tensor], squared: bool = True) -> Union[numpy.ndarray, torch.Tensor] Compute the pairwise Euclidean distance between all pairs of samples in matrices X and Y. :param X: Matrix with shape (N, D), where each row represents a sample. :type X: np.ndarray or torch.Tensor :param Y: Matrix with shape (M, D), where each row represents a sample. :type Y: np.ndarray or torch.Tensor :param squared: If True, return squared Euclidean distances. Default is True. :type squared: bool, optional :returns: Pairwise Euclidean distance matrix with shape (N, M). :rtype: np.ndarray or torch.Tensor :raises AssertionError: If the number of features in X and Y do not match. .. py:function:: _label_distance_backend(X: Union[numpy.ndarray, torch.Tensor], Y: Union[numpy.ndarray, torch.Tensor], label_transfer: Union[numpy.ndarray, torch.Tensor]) -> Union[numpy.ndarray, torch.Tensor] Generate a matrix of size (N, M) by indexing into the label_transfer matrix using the values in X and Y. :param X: Array with shape (N, ) containing integer values ranging from 0 to K. :type X: np.ndarray or torch.Tensor :param Y: Array with shape (M, ) containing integer values ranging from 0 to L. :type Y: np.ndarray or torch.Tensor :param label_transfer: Matrix with shape (K, L) containing the label transfer cost. :type label_transfer: np.ndarray or torch.Tensor :returns: Matrix with shape (N, M) where each element is the value from label_transfer indexed by the corresponding values in X and Y. :rtype: np.ndarray or torch.Tensor :raises AssertionError: If the shape of X or Y is not one-dimensional or if they contain non-integer values. .. py:function:: _correlation_distance_backend(X, Y) .. py:function:: _jaccard_distance_backend(X, Y) .. py:function:: _chebyshev_distance_backend(X, Y) .. py:function:: _canberra_distance_backend(X, Y) .. py:function:: _braycurtis_distance_backend(X, Y) .. py:function:: _hamming_distance_backend(X, Y) .. py:function:: _minkowski_distance_backend(X, Y) .. py:function:: calc_distance(X: Union[List[Union[numpy.ndarray, torch.Tensor]], Union[numpy.ndarray, torch.Tensor]], Y: Union[List[Union[numpy.ndarray, torch.Tensor]], Union[numpy.ndarray, torch.Tensor]], metric: Union[List[str], str] = 'euc', label_transfer: Optional[Union[numpy.ndarray, torch.Tensor]] = None) -> Union[numpy.ndarray, torch.Tensor] Calculate the distance between all pairs of samples in matrices X and Y using the specified metric. :param X: Matrix with shape (N, D), where each row represents a sample. :type X: np.ndarray or torch.Tensor :param Y: Matrix with shape (M, D), where each row represents a sample. 
:type Y: np.ndarray or torch.Tensor :param metric: The metric to use for calculating distances. Options are 'euc', 'euclidean', 'square_euc', 'square_euclidean', 'kl', 'sym_kl', 'cos', 'cosine', 'label'. Default is 'euc'. :type metric: str, optional :param label_transfer: Matrix with shape (K, L) containing the label transfer cost. Required if metric is 'label'. Default is None. :type label_transfer: Optional[np.ndarray or torch.Tensor], optional :returns: Pairwise distance matrix with shape (N, M). :rtype: np.ndarray or torch.Tensor :raises AssertionError: If the number of features in X and Y do not match. If `metric` is not one of the supported metrics. If `label_transfer` is required but not provided. .. py:function:: calc_probability(distance_matrix: Union[numpy.ndarray, torch.Tensor], probability_type: str = 'gauss', probability_parameter: Optional[float] = None) -> Union[numpy.ndarray, torch.Tensor] Calculate probability based on the distance matrix and specified probability type. :param distance_matrix: The distance matrix. :type distance_matrix: np.ndarray or torch.Tensor :param probability_type: The type of probability to calculate. Options are 'Gauss', 'cos_prob', and 'prob'. Default is 'Gauss'. :type probability_type: str, optional :param probability_parameter: The parameter for the probability calculation. Required for certain probability types. Default is None. :type probability_parameter: Optional[float], optional :returns: The calculated probability matrix. :rtype: np.ndarray or torch.Tensor :raises ValueError: If `probability_type` is not one of the supported types or if required parameters are missing. .. py:function:: get_P_core(nx: Union[ot.backend.TorchBackend, ot.backend.NumpyBackend], type_as: Union[torch.Tensor, numpy.ndarray], Dim: Union[torch.Tensor, numpy.ndarray], spatial_dist: Union[numpy.ndarray, torch.Tensor], exp_dist: List[Union[numpy.ndarray, torch.Tensor]], sigma2: Union[int, float, numpy.ndarray, torch.Tensor], model_mul: Union[numpy.ndarray, torch.Tensor], gamma: Union[float, numpy.ndarray, torch.Tensor], samples_s: Optional[List[float]] = None, sigma2_variance: float = 1, probability_type: Union[str, List[str]] = 'Gauss', probability_parameters: Optional[List] = None, eps: float = 1e-08, sparse_calculation_mode: bool = False, top_k: int = -1) Compute assignment matrix P and additional results based on given distances and parameters. :param nx: Backend module (e.g., numpy or torch). :type nx: module :param type_as: Type to which the output should be cast. :type type_as: type :param spatial_dist: Spatial distance matrix. :type spatial_dist: np.ndarray or torch.Tensor :param exp_dist: List of expression distance matrices. :type exp_dist: List[np.ndarray or torch.Tensor] :param sigma2: Sigma squared value. :type sigma2: int, float, np.ndarray or torch.Tensor :param alpha: Alpha values. :type alpha: np.ndarray or torch.Tensor :param gamma: Gamma value. :type gamma: float, np.ndarray or torch.Tensor :param Sigma: Sigma values. :type Sigma: np.ndarray or torch.Tensor :param samples_s: Samples. Default is None. :type samples_s: Optional[List[float]], optional :param sigma2_variance: Sigma squared variance. Default is 1. :type sigma2_variance: float, optional :param probability_type: Probability type. Default is 'Gauss'. :type probability_type: Union[str, List[str]], optional :param probability_parameters: Probability parameters. Default is None. 
:type probability_parameters: Optional[List[float]], optional :returns: * *np.ndarray or torch.Tensor* -- Assignment matrix P. * *dict* -- Additional results. .. py:function:: get_P(nx: Union[ot.backend.TorchBackend, ot.backend.NumpyBackend], type_as: Union[torch.Tensor, numpy.ndarray], Dim: Union[torch.Tensor, numpy.ndarray], spatial_dist: Union[numpy.ndarray, torch.Tensor], exp_dist: List[Union[numpy.ndarray, torch.Tensor]], sigma2: Union[int, float, numpy.ndarray, torch.Tensor], alpha: Union[numpy.ndarray, torch.Tensor], gamma: Union[float, numpy.ndarray, torch.Tensor], Sigma: Union[numpy.ndarray, torch.Tensor], samples_s: Optional[List[float]] = None, sigma2_variance: float = 1, probability_type: Union[str, List[str]] = 'Gauss', probability_parameters: Optional[List[float]] = None) .. py:function:: get_P_sparse(nx: Union[ot.backend.TorchBackend, ot.backend.NumpyBackend], type_as: Union[torch.Tensor, numpy.ndarray], Dim: Union[torch.Tensor, numpy.ndarray], spatial_XA: Union[numpy.ndarray, torch.Tensor], spatial_XB: Union[numpy.ndarray, torch.Tensor], exp_layer_A: List[Union[numpy.ndarray, torch.Tensor]], exp_layer_B: List[Union[numpy.ndarray, torch.Tensor]], label_transfer: Union[numpy.ndarray, torch.Tensor], sigma2: Union[int, float, numpy.ndarray, torch.Tensor], alpha: Union[numpy.ndarray, torch.Tensor], gamma: Union[float, numpy.ndarray, torch.Tensor], Sigma: Union[numpy.ndarray, torch.Tensor], samples_s: Optional[List[float]] = None, sigma2_variance: float = 1, probability_type: Union[str, List[str]] = 'Gauss', probability_parameters: Optional[List[float]] = None, use_chunk: bool = False, chunk_capacity_scale: int = 1, top_k: int = 1024, metrics: Union[str, List[str]] = 'kl') Calculate sparse assignment matrix P using spatial and expression / representation distances. :param nx: Backend module (e.g., numpy or torch). :type nx: module :param type_as: Type to which the output should be cast. :type type_as: type :param Dim: Dimensionality of the spatial data. :type Dim: int :param spatial_XA: Spatial coordinates of sample A. :type spatial_XA: np.ndarray or torch.Tensor :param spatial_XB: Spatial coordinates of sample B. :type spatial_XB: np.ndarray or torch.Tensor :param exp_layer_A: Expression / representation data of sample A. :type exp_layer_A: np.ndarray or torch.Tensor :param exp_layer_B: Expression / representation data of sample B. :type exp_layer_B: np.ndarray or torch.Tensor :param label_transfer: Label transfer cost matrix. :type label_transfer: np.ndarray or torch.Tensor :param sigma2: Sigma squared value. :type sigma2: int, float, np.ndarray or torch.Tensor :param alpha: Alpha values. :type alpha: np.ndarray or torch.Tensor :param gamma: Gamma value. :type gamma: float, np.ndarray or torch.Tensor :param Sigma: Sigma values. :type Sigma: np.ndarray or torch.Tensor :param samples_s: Samples. Default is None. :type samples_s: Optional[List[float]], optional :param sigma2_variance: Sigma squared variance. Default is 1. :type sigma2_variance: float, optional :param probability_type: Probability type. Default is 'Gauss'. :type probability_type: Union[str, List[str]], optional :param probability_parameters: Probability parameters. Default is None. :type probability_parameters: Optional[List[float]], optional :param use_chunk: Whether to use chunking for large datasets. Default is False. :type use_chunk: bool, optional :param chunk_capacity_scale: Scale factor for chunk capacity. Default is 1. 
:type chunk_capacity_scale: int, optional :param top_k: Number of top elements to keep in the sparse matrix. Default is 1024. :type top_k: int, optional :param metrics: Distance metrics to use. Default is 'kl'. :type metrics: Union[str, List[str]], optional :returns: * *Union[np.ndarray, torch.Tensor]* -- Sparse assignment matrix P. * *dict* -- Additional results.
.. py:function:: update_Sp(nx, type_as, step_size, batch_size, SVI_mode, assignment_results, Sp, Sp_spatial, Sp_sigma2)
.. py:function:: update_gamma(nx, type_as, gamma, batch_size, gamma_a, gamma_b, Sp_spatial, SVI_mode)
.. py:function:: update_alpha(nx, type_as, step_size, alpha, kappa, NA, assignment_results)
.. py:function:: update_nonrigid(nx, type_as, SVI_mode, guidance_effect, SigmaInv, step_size, sigma2, lambdaVF, GammaSparse, U, K_NA, PXB_term, P, coordsB, RnA, guidance_epsilon, U_I, R_AI, X_BI)
.. py:function:: update_rigid(nx, type_as)
.. py:function:: update_sigma2()
.. py:function:: update_assignment_P()
.. py:function:: con_K(X: Union[numpy.ndarray, torch.Tensor], Y: Union[numpy.ndarray, torch.Tensor], beta: Union[int, float] = 0.01) -> Union[numpy.ndarray, torch.Tensor] con_K constructs the Squared Exponential (SE) kernel, where K(i,j)=k(X_i,Y_j)=exp(-beta*||X_i-Y_j||^2). :param X: The first input matrix, :math:`X \in \mathbb{R}^{N \times d}`. :param Y: The second input matrix, :math:`Y \in \mathbb{R}^{M \times d}`. :param beta: The length-scale of the SE kernel. :returns: The kernel matrix :math:`K \in \mathbb{R}^{N \times M}`. :rtype: K
.. py:function:: con_K_geodist()
.. py:function:: get_kernel(spatial_coords: Union[numpy.ndarray, torch.Tensor], inducing_variables_num: int, kernel_bandwidth: float, sampling_method: str, kernel_type: str = 'euc', add_evaluation_points: Optional[Union[numpy.ndarray, torch.Tensor]] = None) -> Tuple[Union[numpy.ndarray, torch.Tensor], Union[numpy.ndarray, torch.Tensor], Union[numpy.ndarray, torch.Tensor], Union[numpy.ndarray, torch.Tensor], Optional[Union[numpy.ndarray, torch.Tensor]]] Construct a kernel matrix for spatial data. :param spatial_coords: The spatial coordinates of the data points. :type spatial_coords: Union[np.ndarray, torch.Tensor] :param inducing_variables_num: The number of inducing variables to sample. :type inducing_variables_num: int :param sampling_method: The method to use for sampling inducing variables. :type sampling_method: str :param kernel_bandwidth: The bandwidth parameter for the kernel. :type kernel_bandwidth: float :param kernel_type: The type of kernel to construct. Currently supports "euc". :type kernel_type: str :param add_evaluation_points: Additional points to evaluate the kernel at. Defaults to None. :type add_evaluation_points: Optional[Union[np.ndarray, torch.Tensor]], optional :returns: A tuple containing the inducing variables, their indices, the sparse kernel matrix, the kernel matrix for spatial coordinates, and the evaluation kernel matrix (if provided). :rtype: Tuple[Union[np.ndarray, torch.Tensor], Union[np.ndarray, torch.Tensor], Union[np.ndarray, torch.Tensor], Union[np.ndarray, torch.Tensor], Optional[Union[np.ndarray, torch.Tensor]]] :raises NotImplementedError: If the specified kernel type is not implemented.
.. py:function:: kl_divergence_backend(X, Y, probabilistic=True) Returns pairwise KL divergence (over all pairs of samples) of two matrices X and Y.
Takes advantage of POT backend to speed up computation. :param X: np array with dim (n_samples by n_features) :param Y: np array with dim (m_samples by n_features) :returns: np array with dim (n_samples by m_samples). Pairwise KL divergence matrix. :rtype: D
.. py:function:: kl_distance(X_A: Union[numpy.ndarray, torch.Tensor], X_B: Union[numpy.ndarray, torch.Tensor], use_gpu: bool = True, chunk_num: int = 1, symmetry: bool = True) -> Union[numpy.ndarray, torch.Tensor] Calculate the KL distance between two matrices. :param X_A: The first input matrix with shape n x d :type X_A: Union[np.ndarray, torch.Tensor] :param X_B: The second input matrix with shape m x d :type X_B: Union[np.ndarray, torch.Tensor] :param use_gpu: Whether to use GPU for chunking. Defaults to True. :type use_gpu: bool, optional :param chunk_num: The number of chunks. The larger the number, the smaller the GPU memory usage, but the slower the calculation speed. Defaults to 1. :type chunk_num: int, optional :param symmetry: Whether to use symmetric KL divergence. Defaults to True. :type symmetry: bool, optional :returns: KL distance matrix of the two inputs with shape n x m. :rtype: Union[np.ndarray, torch.Tensor]
.. py:function:: calc_exp_dissimilarity(X_A: Union[numpy.ndarray, torch.Tensor], X_B: Union[numpy.ndarray, torch.Tensor], dissimilarity: str = 'kl', chunk_num: int = 1) -> Union[numpy.ndarray, torch.Tensor] Calculate expression dissimilarity. :param X_A: Gene expression matrix of sample A. :param X_B: Gene expression matrix of sample B. :param dissimilarity: Expression dissimilarity measure: ``'kl'``, ``'euclidean'``, ``'euc'``, ``'cos'``, or ``'cosine'``. :returns: The dissimilarity matrix of two feature samples. :rtype: Union[np.ndarray, torch.Tensor]
.. py:function:: cal_dist(X_A: Union[numpy.ndarray, torch.Tensor], X_B: Union[numpy.ndarray, torch.Tensor], use_gpu: bool = True, chunk_num: int = 1, return_gpu: bool = True) -> Union[numpy.ndarray, torch.Tensor] Calculate the distance between two matrices. :param X_A: The first input matrix with shape n x d :type X_A: Union[np.ndarray, torch.Tensor] :param X_B: The second input matrix with shape m x d :type X_B: Union[np.ndarray, torch.Tensor] :param use_gpu: Whether to use GPU for chunking. Defaults to True. :type use_gpu: bool, optional :param chunk_num: The number of chunks. The larger the number, the smaller the GPU memory usage, but the slower the calculation speed. Defaults to 1. :type chunk_num: int, optional :returns: Distance matrix of the two inputs with shape n x m. :rtype: Union[np.ndarray, torch.Tensor]
.. py:function:: cal_dot(mat1: Union[numpy.ndarray, torch.Tensor], mat2: Union[numpy.ndarray, torch.Tensor], use_chunk: bool = False, use_gpu: bool = True, chunk_num: int = 20) -> Union[numpy.ndarray, torch.Tensor] Calculate the matrix multiplication of two matrices. :param mat1: The first input matrix with shape n x d :type mat1: Union[np.ndarray, torch.Tensor] :param mat2: The second input matrix with shape d x m. We assume m << n, so mat2 does not require chunking. :type mat2: Union[np.ndarray, torch.Tensor] :param use_chunk: Whether to use chunking to reduce GPU memory usage. Note that setting this to ``True`` will slow down the calculation. Defaults to False. :type use_chunk: bool, optional :param use_gpu: Whether to use GPU for chunking. Defaults to True. :type use_gpu: bool, optional :param chunk_num: The number of chunks. The larger the number, the smaller the GPU memory usage, but the slower the calculation speed. Defaults to 20.
:type chunk_num: int, optional :returns: Matrix multiplication result with shape n x m :rtype: Union[np.ndarray, torch.Tensor] .. py:function:: get_optimal_R(coordsA: Union[numpy.ndarray, torch.Tensor], coordsB: Union[numpy.ndarray, torch.Tensor], P: Union[numpy.ndarray, torch.Tensor], R_init: Union[numpy.ndarray, torch.Tensor]) Get the optimal rotation matrix R :param coordsA: The first input matrix with shape n x d :type coordsA: Union[np.ndarray, torch.Tensor] :param coordsB: The second input matrix with shape n x d :type coordsB: Union[np.ndarray, torch.Tensor] :param P: The optimal transport matrix with shape n x n :type P: Union[np.ndarray, torch.Tensor] :returns: The optimal rotation matrix R with shape d x d :rtype: Union[np.ndarray, torch.Tensor] .. py:function:: _cal_cosine_similarity(tensor1, tensor2, dim=1, eps=1e-08) .. py:function:: _cos_similarity(mat1: Union[numpy.ndarray, torch.Tensor], mat2: Union[numpy.ndarray, torch.Tensor]) .. py:function:: _dist(mat1: Union[numpy.ndarray, torch.Tensor], mat2: Union[numpy.ndarray, torch.Tensor], metric: str = 'euc') -> Union[numpy.ndarray, torch.Tensor] .. py:function:: coarse_rigid_alignment(nx, type_as, samples: List[anndata.AnnData], coordsA: Union[numpy.ndarray, torch.Tensor], coordsB: Union[numpy.ndarray, torch.Tensor], init_layer: str = 'X', init_field: str = 'layer', genes: Optional[Union[list, numpy.ndarray]] = None, top_K: int = 10, allow_flip: bool = False, verbose: bool = True, n_sampling: Optional[int] = 20000) .. py:function:: coarse_rigid_alignment(coordsA: Union[numpy.ndarray, torch.Tensor], coordsB: Union[numpy.ndarray, torch.Tensor], X_A: Union[numpy.ndarray, torch.Tensor], X_B: Union[numpy.ndarray, torch.Tensor], transformed_points: Optional[Union[numpy.ndarray, torch.Tensor]] = None, dissimilarity: str = 'kl', top_K: int = 10, allow_flip: bool = False, verbose: bool = True) -> Tuple[Any, Any, Any, Any, Union[numpy.ndarray, Any], Union[numpy.ndarray, Any]] .. py:function:: inlier_from_NN(train_x, train_y, distance) .. py:function:: coarse_rigid_alignment_debug(coordsA: Union[numpy.ndarray, torch.Tensor], coordsB: Union[numpy.ndarray, torch.Tensor], DistMat: Union[numpy.ndarray, torch.Tensor], nx: Union[ot.backend.TorchBackend, ot.backend.NumpyBackend], sub_sample_num: int = -1, top_K: int = 10, transformed_points: Optional[Union[numpy.ndarray, torch.Tensor]] = None) -> Union[numpy.ndarray, torch.Tensor] .. py:function:: inlier_from_NN_debug(train_x, train_y, distance) .. py:function:: voxel_data(nx: Union[ot.backend.TorchBackend, ot.backend.NumpyBackend], coords: Union[numpy.ndarray, torch.Tensor], gene_exp: Union[numpy.ndarray, torch.Tensor], voxel_size: Optional[float] = None, voxel_num: Optional[int] = 10000) Voxelization of the data. :param coords: The coordinates of the data points. :type coords: np.ndarray or torch.Tensor :param gene_exp: The gene expression of the data points. :type gene_exp: np.ndarray or torch.Tensor :param voxel_size: The size of the voxel. :type voxel_size: float :param voxel_num: The number of voxels. :type voxel_num: int :returns: * **voxel_coords** (*np.ndarray or torch.Tensor*) -- The coordinates of the voxels. * **voxel_gene_exp** (*np.ndarray or torch.Tensor*) -- The gene expression of the voxels. .. py:function:: _init_guess_sigma2(XA, XB, subsample=20000) .. 
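A simplified NumPy illustration of the voxelization idea behind ``voxel_data`` above: points are binned on a regular grid and coordinates/expression are averaged per occupied voxel. This is a sketch of the concept, not the package's implementation (which additionally takes a backend ``nx`` and supports an explicit ``voxel_size``):

.. code-block:: python

    import numpy as np

    def voxelize(coords: np.ndarray, gene_exp: np.ndarray, voxel_num: int = 10000):
        """Average coordinates and expression over a regular grid of roughly voxel_num bins."""
        dim = coords.shape[1]
        bins_per_axis = max(1, int(round(voxel_num ** (1.0 / dim))))
        mins, maxs = coords.min(axis=0), coords.max(axis=0)
        # Integer voxel index of every point along each axis.
        idx = np.floor((coords - mins) / (maxs - mins + 1e-12) * bins_per_axis)
        idx = np.clip(idx, 0, bins_per_axis - 1).astype(int)
        keys, inverse = np.unique(idx, axis=0, return_inverse=True)
        inverse = np.asarray(inverse).reshape(-1)
        counts = np.bincount(inverse, minlength=len(keys)).astype(float)
        voxel_coords = np.zeros((len(keys), dim))
        voxel_gene_exp = np.zeros((len(keys), gene_exp.shape[1]))
        np.add.at(voxel_coords, inverse, coords)      # sum coordinates per voxel
        np.add.at(voxel_gene_exp, inverse, gene_exp)  # sum expression per voxel
        return voxel_coords / counts[:, None], voxel_gene_exp / counts[:, None]

..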
py:function:: _init_probability_parameters(exp_layer_A: List[Union[numpy.ndarray, torch.Tensor]], exp_layer_B: List[Union[numpy.ndarray, torch.Tensor]], dissimilarity: Union[str, List[str]] = 'kl', probability_type: Union[str, List[str]] = 'gauss', probability_parameters: Optional[Union[float, List[float]]] = None, subsample=20000) Initialize probability parameters for the given expression layers. :param exp_layer_A: List of expression layers for dataset A. :type exp_layer_A: List[np.ndarray] :param exp_layer_B: List of expression layers for dataset B. :type exp_layer_B: List[np.ndarray] :param dissimilarity: List of dissimilarity metrics. :type dissimilarity: List[str] :param probability_type: List of probability types. :type probability_type: List[str] :param probability_parameters: List of probability parameters to be initialized. :type probability_parameters: List[Optional[float]] :param subsample: Number of subsamples to use. Defaults to 20000. :type subsample: int, optional :returns: List of initialized probability parameters. :rtype: List[Optional[float]] .. py:function:: _get_anneling_factor(nx, type_as, start, end, iter) .. py:function:: _init_guess_beta2(nx, XA, XB, dissimilarity='kl', partial_robust_level=1, beta2=None, beta2_end=None, subsample=5000, verbose=False) .. py:function:: _dense_to_sparse(mat: Union[numpy.ndarray, torch.Tensor], sparse_method: str = 'topk', threshold: Union[int, float] = 100, axis: int = 0, descending=False) .. py:function:: empty_cache(device: str = 'cpu') .. py:data:: nx_torch .. py:data:: _cat .. py:data:: _unique .. py:data:: _var .. py:data:: _data .. py:data:: _unsqueeze .. py:data:: _mul .. py:data:: _power .. py:data:: _psi .. py:data:: _pinv .. py:data:: _dot .. py:data:: _identity .. py:data:: _linalg .. py:data:: _prod .. py:data:: _pi .. py:data:: _chunk .. py:data:: _randperm .. py:data:: _roll .. py:data:: _choice .. py:data:: _topk .. py:data:: _dstack .. py:data:: _vstack .. py:data:: _hstack .. py:data:: _split .. py:function:: torch_like_split(arr, size, dim=0)
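``torch_like_split`` is undocumented here; its name and signature suggest ``torch.split``-style splitting of an array along a given dimension. Under that assumption, a NumPy equivalent would look roughly like the sketch below (the helper name is hypothetical):

.. code-block:: python

    import numpy as np

    def split_like_torch(arr: np.ndarray, size, dim: int = 0):
        """Split arr along dim into chunks, mirroring torch.split semantics.

        size may be an int (equal chunks, last one possibly smaller) or a list of chunk sizes.
        """
        if isinstance(size, int):
            cut_points = list(range(size, arr.shape[dim], size))
        else:
            cut_points = list(np.cumsum(size)[:-1])
        return np.split(arr, cut_points, axis=dim)

For example, ``split_like_torch(np.arange(10), 4)`` yields chunks of length 4, 4, and 2.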