spateo.tdr.interpolations.interpolation_deeplearn.interpolation_nn#

Module Contents#

Classes#

A

Fully connected network with leaky-ReLU activations (one hidden layer by default), mapping between network_dim and data_dim.

B

Fully connected network like A, but with three hidden layers by default.

SineLayer

Sine-activated linear layer, the building block of SIRENs.

h

Core interpolation network: a plain fully connected network, or a SIREN-style network when sirens=True.

MainFlow

Full interpolation model composing the core network h with the optional transformations A and B.

class spateo.tdr.interpolations.interpolation_deeplearn.interpolation_nn.A(network_dim, data_dim, hidden_features=256, hidden_layers=1, activation_function=torch.nn.functional.leaky_relu)[source]#

Bases: torch.nn.Module

A fully connected network mapping between the network space (network_dim) and the data space (data_dim), with hidden_layers hidden layers of width hidden_features and the given activation_function (leaky ReLU by default).

The rest of this entry is inherited from torch.nn.Module:

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call to(), etc.

Note

As per the example above, an __init__() call to the parent class must be made before assignment on the child.

Variables:

training (bool) – Whether this module is in training or evaluation mode.

forward(inp)[source]#
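A minimal usage sketch (not from the spateo docs): the calling convention is inferred from the constructor signature, and the two dimensions are chosen equal so the sketch does not depend on which direction A maps between network_dim and data_dim.

import torch

from spateo.tdr.interpolations.interpolation_deeplearn.interpolation_nn import A

net = A(network_dim=3, data_dim=3, hidden_features=256, hidden_layers=1)
x = torch.randn(128, 3)  # a batch of 128 three-dimensional points
y = net(x)               # calls forward(inp); expected shape (128, 3) with these equal dims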
class spateo.tdr.interpolations.interpolation_deeplearn.interpolation_nn.B(network_dim, data_dim, hidden_features=256, hidden_layers=3, activation_function=torch.nn.functional.leaky_relu)[source]#

Bases: torch.nn.Module

Structurally identical to A but deeper, with three hidden layers by default. The inherited torch.nn.Module docstring is reproduced under A above; the usage sketch there applies to B as well.

forward(inp)[source]#
class spateo.tdr.interpolations.interpolation_deeplearn.interpolation_nn.SineLayer(in_features, out_features, bias=True, is_first=False, omega_0=30.0)[source]#

Bases: torch.nn.Module

Following the SIREN formulation, each sine nonlinearity is given activations that are standard normal distributed, except in the first layer, where a factor ω₀ increases the spatial frequency so as to better match the frequency spectrum of the signal. Training of a SIREN can be further accelerated by applying the factor ω₀ in all layers, factorizing the weight matrix W as W = Ŵ · ω₀ with Ŵ initialized by the standard SIREN scheme (see init_weights()). This keeps the distribution of activations constant, but boosts the gradients to the weight matrix Ŵ by the factor ω₀ while leaving gradients w.r.t. the input of the sine neuron unchanged.

init_weights()[source]#
forward(input)[source]#
forward_with_intermediate(input)[source]#
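The factorization above corresponds to the standard SIREN initialization of Sitzmann et al. (2020). Below is a hedged sketch of what init_weights() and forward() plausibly compute; it reconstructs the published scheme and is not guaranteed to match this module line for line.

import math

import torch
import torch.nn as nn

in_features, out_features = 3, 256
omega_0, is_first = 30.0, True

linear = nn.Linear(in_features, out_features)
with torch.no_grad():
    if is_first:
        bound = 1.0 / in_features                       # first layer: U(-1/n, 1/n)
    else:
        bound = math.sqrt(6.0 / in_features) / omega_0  # hidden: U(-sqrt(6/n)/w0, sqrt(6/n)/w0)
    linear.weight.uniform_(-bound, bound)

x = torch.randn(8, in_features)
y = torch.sin(omega_0 * linear(x))  # sin(omega_0 * (Wx + b)): the W = W_hat * omega_0 trick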
class spateo.tdr.interpolations.interpolation_deeplearn.interpolation_nn.h(input_network_dim, output_network_dim, hidden_features=256, hidden_layers=3, sirens=False, first_omega_0=30.0, hidden_omega_0=30.0)[source]#

Bases: torch.nn.Module

The core interpolation network, mapping input_network_dim-dimensional inputs to output_network_dim-dimensional outputs. With sirens=False (the default) it is a standard fully connected network; with sirens=True it is presumably built from SineLayer blocks, with frequency factors first_omega_0 for the first layer and hidden_omega_0 for the rest. The inherited torch.nn.Module docstring is reproduced under A above.

forward(inp)[source]#
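A construction sketch for h with hypothetical sizes; that sirens=True switches the hidden layers to SineLayer blocks is an inference from the omega parameters, not a documented guarantee.

import torch

from spateo.tdr.interpolations.interpolation_deeplearn.interpolation_nn import h

# Plain fully connected variant (sirens=False is the default).
mlp = h(input_network_dim=3, output_network_dim=3, hidden_features=256, hidden_layers=3)

# Presumed SIREN variant with the default frequency factors.
siren = h(input_network_dim=3, output_network_dim=3, sirens=True,
          first_omega_0=30.0, hidden_omega_0=30.0)

x = torch.randn(64, 3)
y = siren(x)  # forward(inp); expected shape (64, 3)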
class spateo.tdr.interpolations.interpolation_deeplearn.interpolation_nn.MainFlow(h, A=None, B=None, enforce_positivity=False)[source]#

Bases: torch.nn.Module

The full interpolation model: the core network h, optionally combined with the transformations A and B. When enforce_positivity is set, the model output is constrained to be positive. The inherited torch.nn.Module docstring is reproduced under A above.

forward(t, x, freeze=None)[source]#
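An end-to-end assembly sketch. The wiring assumed here — A encoding data_dim-dimensional points into the network space, h acting there, and B decoding back — is inferred from the signatures, and treating t as an optional time argument (None for static interpolation) is likewise an assumption.

import torch

from spateo.tdr.interpolations.interpolation_deeplearn.interpolation_nn import A, B, MainFlow, h

data_dim, network_dim = 3, 100  # e.g. 3D coordinates, 100-D network space (hypothetical)

core = h(input_network_dim=network_dim, output_network_dim=network_dim)
enc = A(network_dim=network_dim, data_dim=data_dim)  # presumed encoder
dec = B(network_dim=network_dim, data_dim=data_dim)  # presumed decoder
flow = MainFlow(core, A=enc, B=dec, enforce_positivity=False)

x = torch.randn(256, data_dim)
y = flow(None, x)  # forward(t, x, freeze=None); t=None assumed valid for static data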