spateo.tdr.interpolations.interpolation_deeplearn
=================================================

.. py:module:: spateo.tdr.interpolations.interpolation_deeplearn


Submodules
----------

.. toctree::
   :maxdepth: 1

   /autoapi/spateo/tdr/interpolations/interpolation_deeplearn/deep_interpolation/index
   /autoapi/spateo/tdr/interpolations/interpolation_deeplearn/interpolation_nn/index
   /autoapi/spateo/tdr/interpolations/interpolation_deeplearn/nn_losses/index


Classes
-------

.. autoapisummary::

   spateo.tdr.interpolations.interpolation_deeplearn.DataSampler
   spateo.tdr.interpolations.interpolation_deeplearn.DeepInterpolation


Package Contents
----------------

.. py:class:: DataSampler(path_to_data: Union[str, None] = None, data: Union[anndata.AnnData, dict, None] = None, skey: str = 'spatial', ekey: str = 'M_s', wkey: Union[str, None] = None, normalize_data: bool = False, number_of_random_samples: str = 'all', weighted: bool = False)

   Bases: :py:obj:`object`

   This module loads and retains the data pairs (X, Y) and delivers batches of them to the
   DeepInterpolation module upon calling. The module can load the data from a .mat file. The file
   must contain two 2D matrices X and Y with an equal number of rows.

   X: The spatial coordinates of each cell / binning / segmentation.
   Y: The expression values at the corresponding coordinates X.

   .. py:method:: generate_batch(batch_size: int, sample_subset_indices: str = 'all')

      Generate random batches of the given size "batch_size" from the (X, Y) sample pairs.

      :param batch_size: If batch_size is set to "all", all the samples will be returned.
      :param sample_subset_indices: Use this argument to further subset the samples (for example,
                                    based on sample quality). If set to "all", no samples are
                                    filtered out.


.. py:class:: DeepInterpolation(model: types.ModuleType, data_sampler: object, sirens: bool = False, enforce_positivity: bool = False, loss_function: Union[Callable, None] = weighted_mse(), smoothing_factor: Union[float, None] = True, stability_factor: Union[float, None] = True, load_model_from_buffer: bool = False, buffer_path: str = 'model_buffer/', hidden_features: int = 256, hidden_layers: int = 3, first_omega_0: float = 30.0, hidden_omega_0: float = 30.0, **kwargs)

   .. py:attribute:: buffer_path
      :value: 'model_buffer/'


   .. py:attribute:: smoothing_factor
      :value: True


   .. py:attribute:: stability_factor
      :value: True


   .. py:attribute:: loss_function


   .. py:attribute:: loss_traj
      :value: []


   .. py:attribute:: autoencoder_loss_traj
      :value: []


   .. py:attribute:: data_sampler


   .. py:attribute:: normalization_factor


   .. py:attribute:: data_dim


   .. py:attribute:: input_network_dim


   .. py:attribute:: output_network_dim


   .. py:method:: high2low(high_batch)


   .. py:method:: low2high(low_batch)


   .. py:method:: predict(input_x=None, to_numpy=True)


   .. py:method:: train(max_iter: int, data_batch_size: int, autoencoder_batch_size: int, data_lr: float, autoencoder_lr: float, sample_fraction: float = 1, iter_per_sample_update: Union[int, None] = None)

      The training method for the DeepInterpolation model object.

      :param max_iter: The maximum number of iterations for which the network will be trained.
      :param data_batch_size: The size of the data sample batches to be generated in each iteration.
      :param autoencoder_batch_size: The size of the auto-encoder training batches to be generated in
                                     each iteration. Must be no greater than data_batch_size.
      :param data_lr: The learning rate for network training.
      :param autoencoder_lr: The learning rate for training the auto-encoder. Has no effect if
                             network_dim equals data_dim.
      :param sample_fraction: The fraction of best samples to be selected out of the velocity
                              samples.
      :param iter_per_sample_update: The frequency (in iterations) of updating the subset of best
                                     samples. Has no effect if velocity_sample_fraction and
                                     time_course_sample_fraction are set to 1.


   .. py:method:: save()


   .. py:method:: load()
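
The sketch below illustrates how the two classes documented above might be wired together. It is
not an official example from the package: it assumes an ``AnnData`` object with spatial coordinates
in ``adata.obsm["spatial"]`` and a smoothed expression layer ``M_s`` (the defaults of ``skey`` and
``ekey``), assumes the sibling ``interpolation_nn`` submodule can be passed as the ``model``
argument (the signature only requires a module), and uses purely illustrative hyperparameter
values and a hypothetical input file name.

.. code-block:: python

    import anndata as ad

    from spateo.tdr.interpolations.interpolation_deeplearn import (
        DataSampler,
        DeepInterpolation,
        interpolation_nn,
    )

    # Hypothetical input: an AnnData with obsm["spatial"] coordinates and an "M_s" layer.
    adata = ad.read_h5ad("example.h5ad")

    # Wrap the AnnData into (X, Y) pairs: X from adata.obsm["spatial"], Y from the "M_s" layer.
    sampler = DataSampler(data=adata, skey="spatial", ekey="M_s", normalize_data=False)

    # Build the interpolation model around the sampler; keyword values here are placeholders.
    interp = DeepInterpolation(
        model=interpolation_nn,
        data_sampler=sampler,
        sirens=False,
        enforce_positivity=False,
        hidden_features=256,
        hidden_layers=3,
    )

    # Train, predict expression at the original spatial coordinates, and persist the model buffer.
    interp.train(
        max_iter=1000,
        data_batch_size=1024,
        autoencoder_batch_size=512,
        data_lr=1e-4,
        autoencoder_lr=1e-4,
    )
    predictions = interp.predict(input_x=adata.obsm["spatial"], to_numpy=True)
    interp.save()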