spateo.external.STAGATE_pyG.gat_conv

Classes

GATConv

The graph attentional operator from the "Graph Attention Networks" paper.

Module Contents

class spateo.external.STAGATE_pyG.gat_conv.GATConv(in_channels: int | Tuple[int, int], out_channels: int, heads: int = 1, concat: bool = True, negative_slope: float = 0.2, dropout: float = 0.0, add_self_loops: bool = True, bias: bool = True, **kwargs)

Bases: torch_geometric.nn.conv.MessagePassing

The graph attentional operator from the “Graph Attention Networks” paper (https://arxiv.org/abs/1710.10903)

\[\mathbf{x}^{\prime}_i = \alpha_{i,i}\mathbf{\Theta}\mathbf{x}_{i} + \sum_{j \in \mathcal{N}(i)} \alpha_{i,j}\mathbf{\Theta}\mathbf{x}_{j},\]

where the attention coefficients \(\alpha_{i,j}\) are computed as

\[\alpha_{i,j} = \frac{ \exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top} [\mathbf{\Theta}\mathbf{x}_i \, \Vert \, \mathbf{\Theta}\mathbf{x}_j] \right)\right)} {\sum_{k \in \mathcal{N}(i) \cup \{ i \}} \exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top} [\mathbf{\Theta}\mathbf{x}_i \, \Vert \, \mathbf{\Theta}\mathbf{x}_k] \right)\right)}.\]
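For concreteness, here is a minimal hand-computed sketch of these coefficients for one node with two neighbors (single head, out_channels = 2; every tensor value below is an illustrative assumption, not a value from the library):

    import torch
    import torch.nn.functional as F

    # Theta-transformed features of node i and two neighbors j, k (toy values).
    theta_xi = torch.tensor([1.0, 0.5])
    theta_xj = torch.tensor([0.2, 0.8])
    theta_xk = torch.tensor([0.9, 0.1])
    a = torch.tensor([0.3, -0.2, 0.1, 0.4])  # attention vector, length 2 * out_channels

    def logit(xi, xj):
        # a^T [Theta x_i || Theta x_j], followed by LeakyReLU (negative_slope=0.2)
        return F.leaky_relu(a @ torch.cat([xi, xj]), negative_slope=0.2)

    # Softmax over N(i) ∪ {i} yields alpha_{i,i}, alpha_{i,j}, alpha_{i,k}.
    logits = torch.stack([logit(theta_xi, n) for n in (theta_xi, theta_xj, theta_xk)])
    alpha = torch.softmax(logits, dim=0)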
Parameters:

in_channels (int or tuple) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of the source and target dimensionalities.

out_channels (int) – Size of each output sample.

heads (int, optional) – Number of multi-head attentions. (default: 1)

concat (bool, optional) – If set to False, the multi-head attentions are averaged instead of concatenated. (default: True)

negative_slope (float, optional) – Negative slope of the LeakyReLU activation. (default: 0.2)

dropout (float, optional) – Dropout probability of the normalized attention coefficients, which exposes each node to a stochastically sampled neighborhood during training. (default: 0.0)

add_self_loops (bool, optional) – If set to False, will not add self-loops to the input graph. (default: True)

bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)

**kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
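A minimal usage sketch, assuming a working PyTorch Geometric installation; the import path follows the module name above, and the graph, feature sizes, and values are purely illustrative:

    import torch
    from spateo.external.STAGATE_pyG.gat_conv import GATConv

    # Toy graph: 4 nodes with 16 features each, edges stored as [2, num_edges].
    x = torch.randn(4, 16)
    edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                               [1, 0, 2, 1, 3, 2]])

    conv = GATConv(in_channels=16, out_channels=8, heads=1)
    out = conv(x, edge_index)  # shape [4, 8]: heads=1, so concat has no effect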

_alpha: torch_geometric.typing.OptTensor
in_channels
out_channels
heads = 1
concat = True
negative_slope = 0.2
dropout = 0.0
add_self_loops = True
lin_src
lin_dst
att_src
att_dst
attentions = None
forward(x: torch.Tensor | torch_geometric.typing.OptPairTensor, edge_index: torch_geometric.typing.Adj, size: torch_geometric.typing.Size = None, return_attention_weights=None, attention=True, tied_attention=None)
Parameters:

return_attention_weights (bool, optional) – If set to True, will additionally return the tuple (edge_index, attention_weights), holding the computed attention weights for each edge. (default: None)
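Continuing the sketch above, requesting the attention weights returns them alongside the output. (The extra attention and tied_attention arguments appear to be STAGATE-specific additions to the stock PyG layer and are not documented here, so they are left at their defaults.)

    # Passing True also returns the (possibly self-loop-augmented) edge_index
    # together with the per-edge attention coefficients of shape [num_edges, heads].
    out, (edge_index_att, alpha) = conv(x, edge_index, return_attention_weights=True)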

message(x_j: torch.Tensor, alpha_j: torch.Tensor, alpha_i: torch_geometric.typing.OptTensor, index: torch.Tensor, ptr: torch_geometric.typing.OptTensor, size_i: int | None) → torch.Tensor
__repr__()