src.evaluation_metric.matching_accuracy

src.evaluation_metric.matching_accuracy(pmat_pred: torch.Tensor, pmat_gt: torch.Tensor, ns: torch.Tensor) → torch.Tensor

Matching Accuracy between predicted permutation matrix and ground truth permutation matrix.

\[\text{matching accuracy} = \frac{tr(\mathbf{X}\cdot {\mathbf{X}^{gt}}^\top)}{\sum \mathbf{X}^{gt}}\]

This function is a wrapper of matching_recall.
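For reference, a minimal sketch of the computation defined by the formula above (not the library's implementation; the helper name matching_accuracy_sketch is hypothetical, and inputs are assumed to be batched 0/1 tensors):

    import torch

    def matching_accuracy_sketch(pmat_pred: torch.Tensor,
                                 pmat_gt: torch.Tensor,
                                 ns: torch.Tensor) -> torch.Tensor:
        # tr(X @ X_gt^T) equals the elementwise sum of X * X_gt, i.e. the number
        # of correctly predicted matches; sum(X_gt) is the number of ground-truth
        # matches, so their ratio is the per-instance accuracy (= recall).
        acc = torch.zeros(pmat_pred.shape[0], device=pmat_pred.device)
        for b in range(pmat_pred.shape[0]):
            n = int(ns[b])                 # number of real nodes in this instance
            x = pmat_pred[b, :n]           # predicted matches
            x_gt = pmat_gt[b, :n]          # ground-truth matches
            acc[b] = torch.sum(x * x_gt) / torch.sum(x_gt)
        return acc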

Parameters
  • pmat_pred – \((b\times n_1 \times n_2)\) predicted permutation matrix \((\mathbf{X})\)

  • pmat_gt – \((b\times n_1 \times n_2)\) ground truth permutation matrix \((\mathbf{X}^{gt})\)

  • ns – \((b)\) number of exact pairs. We support batched instances with different numbers of nodes, and ns is required to specify the exact number of nodes of each instance in the batch.

Returns

\((b)\) matching accuracy

Note

If the graph matching problem has no outliers, it is appropriate to use this metric, and it is commonly called “matching accuracy” in the literature. If there are outliers, it is better to use matching_precision and matching_recall.
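
A brief usage sketch, assuming the module is importable as src.evaluation_metric; the toy batch below is illustrative only (prediction equal to ground truth, so every instance scores 1):

    import torch
    from src.evaluation_metric import matching_accuracy

    b, n = 4, 5
    # Ground truth: identity matching for every instance in the batch.
    pmat_gt = torch.eye(n).unsqueeze(0).repeat(b, 1, 1)
    # Prediction: identical to the ground truth in this toy example.
    pmat_pred = pmat_gt.clone()
    ns = torch.full((b,), n, dtype=torch.int64)  # every instance has n nodes

    acc = matching_accuracy(pmat_pred, pmat_gt, ns)
    print(acc)  # expected: tensor([1., 1., 1., 1.])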