src.evaluation_metric.matching_accuracy

src.evaluation_metric.matching_accuracy(pmat_pred: torch.Tensor, pmat_gt: torch.Tensor, ns: torch.Tensor, idx: int) → torch.Tensor

Matching Accuracy between predicted permutation matrix and ground truth permutation matrix.

\[\text{matching accuracy} = \frac{tr(\mathbf{X}\cdot {\mathbf{X}^{gt}}^\top)}{\sum \mathbf{X}^{gt}}\]

This function is a wrapper of matching_recall.

Parameters
  • pmat_pred – \((b\times n_1 \times n_2)\) predicted permutation matrix \((\mathbf{X})\)

  • pmat_gt – \((b\times n_1 \times n_2)\) ground truth permutation matrix \((\mathbf{X}^{gt})\)

  • ns – \((b\times g)\) number of nodes in graphs, where \(g=2\) for 2GM and \(g>2\) for MGM. Batched instances with different numbers of nodes are supported, and ns is required to specify the exact number of nodes of each instance in the batch.

  • idx – \((int)\) index of the source graph in the graph pair.

Returns

\((b)\) matching accuracy

Note

If the graph matching problem has no outliers, it is appropriate to use this metric, which papers commonly call “matching accuracy”. If there are outliers, matching_precision and matching_recall are better choices.
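To make the formula concrete, below is a minimal sketch of how this metric can be computed from the tensors described above. It is not the library's implementation (the function name `matching_accuracy_sketch` is hypothetical); it simply applies \(tr(\mathbf{X}\cdot {\mathbf{X}^{gt}}^\top) / \sum \mathbf{X}^{gt}\) per instance, cropping each instance to its node count from ns.

```python
import torch

def matching_accuracy_sketch(pmat_pred: torch.Tensor,
                             pmat_gt: torch.Tensor,
                             ns: torch.Tensor,
                             idx: int = 0) -> torch.Tensor:
    """Hypothetical re-implementation of the matching accuracy formula.

    pmat_pred, pmat_gt: (b, n1, n2) permutation matrices.
    ns: (b, g) node counts; ns[i, idx] is the source-graph size of instance i.
    Returns a (b,) tensor of per-instance accuracies.
    """
    batch_size = pmat_pred.shape[0]
    acc = torch.zeros(batch_size)
    for i in range(batch_size):
        n = int(ns[i, idx])          # crop to this instance's real node count
        pred = pmat_pred[i, :n]
        gt = pmat_gt[i, :n]
        # tr(X @ X_gt.T) equals the elementwise sum of X * X_gt
        acc[i] = torch.sum(pred * gt) / torch.sum(gt)
    return acc
```

For example, a prediction identical to the ground truth yields an accuracy of 1.0, while a 3-node instance with one correctly matched node yields 1/3.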