class src.gconv.ChannelIndependentConv(in_features: int, out_features: int, in_edges: int, out_edges: Optional[int] = None)[source]

Channel Independent Embedding Convolution. Proposed by “Yu et al. Learning deep graph matching with channel-independent embedding and Hungarian attention. ICLR 2020.”

  • in_features – the dimension of input node features

  • out_features – the dimension of output node features

  • in_edges – the dimension of input edge features

  • out_edges – (optional) the dimension of output edge features. It needs to be the same as out_features

forward(A: torch.Tensor, emb_node: torch.Tensor, emb_edge: torch.Tensor, mode: int = 1) → Tuple[torch.Tensor, torch.Tensor][source]
  • A – \((b\times n\times n)\) {0,1} adjacency matrix. \(b\): batch size, \(n\): number of nodes

  • emb_node – \((b\times n\times d_n)\) input node embedding. \(d_n\): node feature dimension

  • emb_edge – \((b\times n\times n\times d_e)\) input edge embedding. \(d_e\): edge feature dimension

  • mode – 1 or 2, refer to the paper for details


Returns: \((b\times n\times d^\prime)\) new node embedding and \((b\times n\times n\times d^\prime)\) new edge embedding, where \(d^\prime\): output feature dimension (out_features)
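The tensor interface above can be illustrated with a minimal, shape-compatible sketch. Note this is not the library's implementation: the class name `CISketch` and its internals (ReLU-activated linear maps, degree-normalized edge-gated aggregation) are hypothetical stand-ins chosen only to reproduce the documented input and output shapes.

```python
import torch
import torch.nn as nn


class CISketch(nn.Module):
    """Hypothetical sketch mimicking the documented ChannelIndependentConv
    interface; internals are illustrative, not the library's algorithm."""

    def __init__(self, in_features, out_features, in_edges, out_edges=None):
        super().__init__()
        out_edges = out_features if out_edges is None else out_edges
        # documented constraint: out_edges must equal out_features
        assert out_edges == out_features
        self.node_fc = nn.Linear(in_features, out_features)
        self.edge_fc = nn.Linear(in_edges, out_edges)

    def forward(self, A, emb_node, emb_edge, mode=1):
        # A: (b, n, n); emb_node: (b, n, d_n); emb_edge: (b, n, n, d_e)
        e = torch.relu(self.edge_fc(emb_edge))   # (b, n, n, d')
        h = torch.relu(self.node_fc(emb_node))   # (b, n, d')
        # channel-independent flavor: each output channel of the neighbor
        # message is gated by the matching channel of its edge embedding
        msg = torch.einsum('bij,bijd,bjd->bid', A, e, h)
        deg = A.sum(dim=-1, keepdim=True).clamp(min=1)
        return h + msg / deg, e


b, n = 2, 5
layer = CISketch(in_features=16, out_features=32, in_edges=8)
A = (torch.rand(b, n, n) > 0.5).float()
node, edge = layer(A, torch.randn(b, n, 16), torch.randn(b, n, n, 8))
print(node.shape)  # torch.Size([2, 5, 32])
print(edge.shape)  # torch.Size([2, 5, 5, 32])
```

The output shapes match the documented returns: \((b\times n\times d^\prime)\) for nodes and \((b\times n\times n\times d^\prime)\) for edges.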

training: bool