spateo.external.STAGATE_pyG.gat_conv
====================================

.. py:module:: spateo.external.STAGATE_pyG.gat_conv


Classes
-------

.. autoapisummary::

   spateo.external.STAGATE_pyG.gat_conv.GATConv


Module Contents
---------------

.. py:class:: GATConv(in_channels: Union[int, Tuple[int, int]], out_channels: int, heads: int = 1, concat: bool = True, negative_slope: float = 0.2, dropout: float = 0.0, add_self_loops: bool = True, bias: bool = True, **kwargs)

   Bases: :py:obj:`torch_geometric.nn.conv.MessagePassing`

   The graph attentional operator from the `"Graph Attention Networks"
   <https://arxiv.org/abs/1710.10903>`_ paper

   .. math::

      \mathbf{x}^{\prime}_i = \alpha_{i,i}\mathbf{\Theta}\mathbf{x}_{i} +
      \sum_{j \in \mathcal{N}(i)} \alpha_{i,j}\mathbf{\Theta}\mathbf{x}_{j},

   where the attention coefficients :math:`\alpha_{i,j}` are computed as

   .. math::

      \alpha_{i,j} =
      \frac{
      \exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top}
      [\mathbf{\Theta}\mathbf{x}_i \, \Vert \, \mathbf{\Theta}\mathbf{x}_j]
      \right)\right)}
      {\sum_{k \in \mathcal{N}(i) \cup \{ i \}}
      \exp\left(\mathrm{LeakyReLU}\left(\mathbf{a}^{\top}
      [\mathbf{\Theta}\mathbf{x}_i \, \Vert \, \mathbf{\Theta}\mathbf{x}_k]
      \right)\right)}.

   :param in_channels: Size of each input sample, or :obj:`-1` to derive the size
                       from the first input(s) to the forward method. A tuple
                       corresponds to the sizes of source and target dimensionalities.
   :type in_channels: int or tuple
   :param out_channels: Size of each output sample.
   :type out_channels: int
   :param heads: Number of multi-head attentions. (default: :obj:`1`)
   :type heads: int, optional
   :param concat: If set to :obj:`False`, the multi-head attentions are averaged
                  instead of concatenated. (default: :obj:`True`)
   :type concat: bool, optional
   :param negative_slope: LeakyReLU angle of the negative slope. (default: :obj:`0.2`)
   :type negative_slope: float, optional
   :param dropout: Dropout probability of the normalized attention coefficients,
                   which exposes each node to a stochastically sampled neighborhood
                   during training. (default: :obj:`0`)
   :type dropout: float, optional
   :param add_self_loops: If set to :obj:`False`, will not add self-loops to the
                          input graph. (default: :obj:`True`)
   :type add_self_loops: bool, optional
   :param bias: If set to :obj:`False`, the layer will not learn an additive bias.
                (default: :obj:`True`)
   :type bias: bool, optional
   :param \*\*kwargs: Additional arguments of
                      :class:`torch_geometric.nn.conv.MessagePassing`.
   :type \*\*kwargs: optional

   .. py:attribute:: _alpha
      :type: torch_geometric.typing.OptTensor

   .. py:attribute:: in_channels

   .. py:attribute:: out_channels

   .. py:attribute:: heads
      :value: 1

   .. py:attribute:: concat
      :value: True

   .. py:attribute:: negative_slope
      :value: 0.2

   .. py:attribute:: dropout
      :value: 0.0

   .. py:attribute:: add_self_loops
      :value: True

   .. py:attribute:: lin_src

   .. py:attribute:: lin_dst

   .. py:attribute:: att_src

   .. py:attribute:: att_dst

   .. py:attribute:: attentions
      :value: None

   .. py:method:: forward(x: Union[torch.Tensor, torch_geometric.typing.OptPairTensor], edge_index: torch_geometric.typing.Adj, size: torch_geometric.typing.Size = None, return_attention_weights=None, attention=True, tied_attention=None)

      :param return_attention_weights: If set to :obj:`True`, will additionally
                                       return the tuple
                                       :obj:`(edge_index, attention_weights)`,
                                       holding the computed attention weights for
                                       each edge. (default: :obj:`None`)
      :type return_attention_weights: bool, optional
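
   A minimal usage sketch (not part of the module itself): the layer follows the
   standard :class:`torch_geometric.nn.conv.MessagePassing` calling convention, so
   ``x`` is a ``[num_nodes, in_channels]`` feature matrix and ``edge_index`` a
   ``[2, num_edges]`` index tensor. The toy data and shapes below are illustrative
   assumptions, not values taken from the module.

   .. code-block:: python

      import torch

      from spateo.external.STAGATE_pyG.gat_conv import GATConv

      # Toy graph: 4 nodes with 16 features each and 4 directed edges.
      x = torch.randn(4, 16)
      edge_index = torch.tensor([[0, 1, 2, 3],
                                 [1, 0, 3, 2]], dtype=torch.long)

      conv = GATConv(in_channels=16, out_channels=8, heads=1)

      # Standard attention-based propagation: [4, 8] node embeddings.
      out = conv(x, edge_index)

      # As documented above, ``return_attention_weights=True`` additionally
      # returns the tuple (edge_index, attention_weights) for each edge.
      out, (ei, alpha) = conv(x, edge_index, return_attention_weights=True)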

   .. py:method:: message(x_j: torch.Tensor, alpha_j: torch.Tensor, alpha_i: torch_geometric.typing.OptTensor, index: torch.Tensor, ptr: torch_geometric.typing.OptTensor, size_i: Optional[int]) -> torch.Tensor

   .. py:method:: __repr__()
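
   The ``attention`` and ``tied_attention`` arguments of ``forward`` are additions
   over the upstream :class:`torch_geometric.nn.GATConv`. The sketch below shows one
   plausible way to use them, mirroring how STAGATE couples encoder and decoder
   layers; the exact semantics of ``attentions`` and ``tied_attention`` are
   assumptions here and should be verified against the module source.

   .. code-block:: python

      import torch

      from spateo.external.STAGATE_pyG.gat_conv import GATConv

      x = torch.randn(100, 30)                      # 100 spots, 30 features
      edge_index = torch.randint(0, 100, (2, 500))  # toy spatial neighbor graph

      enc = GATConv(30, 10, heads=1)
      dec = GATConv(10, 30, heads=1)

      # Encoder pass: computes attention coefficients and (assumption) caches
      # them in ``enc.attentions``, which starts out as ``None``.
      z = enc(x, edge_index)

      # Decoder pass reusing ("tying") the encoder's attention coefficients
      # instead of recomputing them (assumption about ``tied_attention``).
      x_rec = dec(z, edge_index, tied_attention=enc.attentions)

      # ``attention=False`` is assumed to bypass the attention mechanism and
      # return only the linearly transformed node features.
      z_plain = enc(x, edge_index, attention=False)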