spateo.tdr.interpolations.interpolation_deeplearn

Submodules

Classes

DataSampler

This module loads and retains the data pairs (X, Y) and delivers batches of them to the DeepInterpolation module.

DeepInterpolation

Package Contents

class spateo.tdr.interpolations.interpolation_deeplearn.DataSampler(path_to_data: str | None = None, data: anndata.AnnData | dict | None = None, skey: str = 'spatial', ekey: str = 'M_s', wkey: str | None = None, normalize_data: bool = False, number_of_random_samples: str = 'all', weighted: bool = False)[source]

Bases: object

This module loads and retains the data pairs (X, Y) and delivers batches of them to the DeepInterpolation module upon calling. The module can also load the data from a .mat file; the file must contain two 2D matrices X and Y with an equal number of rows.

X: The spatial coordinates of each cell / binning / segmentation. Y: The expression values at the corresponding coordinates X.
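As a minimal illustration of the expected (X, Y) layout (plain NumPy with made-up data, not the library's own loading code):

```python
import numpy as np

# Hypothetical data: 100 cells with 3D spatial coordinates (X)
# and expression values for 50 genes at those coordinates (Y).
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 10.0, size=(100, 3))            # spatial coordinates
Y = rng.poisson(2.0, size=(100, 50)).astype(float)   # expression values

# X and Y must have an equal number of rows: one row per cell / bin / segment.
assert X.shape[0] == Y.shape[0]
```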

generate_batch(batch_size: int, sample_subset_indices: str = 'all')[source]

Generate random batches of the given size “batch_size” from the (X, Y) sample pairs.

Parameters:
batch_size

The number of samples per batch. If set to “all”, all the samples will be returned.

sample_subset_indices

Used to further subset the samples (for example, based on sample quality). If set to “all”, no samples are filtered out.
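The described behavior can be sketched as follows. This is an illustrative stand-in written in plain NumPy, not the library's implementation; the function name and exact handling of the arguments are assumptions.

```python
import numpy as np

def generate_batch_sketch(X, Y, batch_size, sample_subset_indices="all", rng=None):
    """Sketch of the described batching behavior (not the library code).

    Draws a random batch of matching (X, Y) rows; an optional index
    subset restricts sampling to pre-filtered samples.
    """
    rng = rng if rng is not None else np.random.default_rng()
    if isinstance(sample_subset_indices, str):       # "all": no filtering
        idx_pool = np.arange(X.shape[0])
    else:
        idx_pool = np.asarray(sample_subset_indices)
    if isinstance(batch_size, str) and batch_size == "all":
        chosen = idx_pool                            # return every sample
    else:
        chosen = rng.choice(idx_pool, size=batch_size, replace=False)
    return X[chosen], Y[chosen]

X = np.arange(20.0).reshape(10, 2)
Y = np.arange(30.0).reshape(10, 3)
xb, yb = generate_batch_sketch(X, Y, batch_size=4)
# xb has shape (4, 2); yb has shape (4, 3)
```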

class spateo.tdr.interpolations.interpolation_deeplearn.DeepInterpolation(model: types.ModuleType, data_sampler: object, sirens: bool = False, enforce_positivity: bool = False, loss_function: Callable | None = weighted_mse(), smoothing_factor: float | None = True, stability_factor: float | None = True, load_model_from_buffer: bool = False, buffer_path: str = 'model_buffer/', hidden_features: int = 256, hidden_layers: int = 3, first_omega_0: float = 30.0, hidden_omega_0: float = 30.0, **kwargs)[source]
buffer_path
smoothing_factor
stability_factor
loss_function
loss_traj = []
autoencoder_loss_traj = []
data_sampler
normalization_factor
data_dim
input_network_dim
output_network_dim
high2low(high_batch)[source]
low2high(low_batch)[source]
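The high2low / low2high pair suggests an auto-encoder-style mapping between the data dimension (data_dim) and a lower network dimension (input_network_dim / output_network_dim). A hypothetical linear sketch of that mapping, with randomly initialized weights standing in for the learned auto-encoder:

```python
import numpy as np

rng = np.random.default_rng(1)
data_dim, network_dim = 50, 8  # hypothetical dimensions

# Random linear encoder/decoder standing in for the learned auto-encoder.
W_enc = rng.normal(size=(data_dim, network_dim))
W_dec = rng.normal(size=(network_dim, data_dim))

def high2low(high_batch):
    # Project a batch from data_dim down to the network dimension.
    return high_batch @ W_enc

def low2high(low_batch):
    # Map a batch from the network dimension back up to data_dim.
    return low_batch @ W_dec

batch = rng.normal(size=(16, data_dim))
low = high2low(batch)    # shape (16, 8)
high = low2high(low)     # shape (16, 50)
```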
predict(input_x=None, to_numpy=True)[source]
train(max_iter: int, data_batch_size: int, autoencoder_batch_size: int, data_lr: float, autoencoder_lr: float, sample_fraction: float = 1, iter_per_sample_update: int | None = None)[source]

The training method for the DeepInterpolation model object.

Parameters:
max_iter

The maximum number of iterations for which the network will be trained.

data_batch_size

The size of the data sample batches to be generated in each iteration.

autoencoder_batch_size

The size of the auto-encoder training batches to be generated in each iteration. Must be no greater than data_batch_size.

data_lr

The learning rate for network training.

autoencoder_lr

The learning rate for training the auto-encoder. Will have no effect if input_network_dim equals data_dim.

sample_fraction

The fraction of best samples to be kept when filtering the velocity samples.

iter_per_sample_update

The frequency (in iterations) at which the subset of best samples is updated. Will have no effect if sample_fraction is set to 1.
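A hedged sketch of the "best sample fraction" idea described above: keep only the fraction of samples with the lowest per-sample loss, refreshing that subset every iter_per_sample_update iterations. The selection rule here is an assumption; the library's actual criterion may differ.

```python
import numpy as np

def best_sample_subset(per_sample_loss, sample_fraction):
    """Indices of the best (lowest-loss) fraction of samples.

    Illustrative only; the library's actual selection rule may differ.
    """
    n_keep = max(1, int(len(per_sample_loss) * sample_fraction))
    return np.argsort(per_sample_loss)[:n_keep]

losses = np.array([0.9, 0.1, 0.5, 0.3, 0.7])
idx = best_sample_subset(losses, sample_fraction=0.6)
# keeps the indices of the 3 lowest losses: [1, 3, 2]
```

With sample_fraction = 1 every sample is kept, so periodic updates of the subset change nothing, which is why iter_per_sample_update has no effect in that case.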

save()[source]
load()[source]