# Sinkhorn¶

class src.lap_solvers.sinkhorn.Sinkhorn(max_iter: int = 10, tau: float = 1.0, epsilon: float = 0.0001, log_forward: bool = True, batched_operation: bool = False)[source]

The Sinkhorn algorithm turns the input matrix into a doubly-stochastic matrix.

The Sinkhorn algorithm first applies an element-wise exp function with temperature $$\tau$$:

$\mathbf{S}_{i,j} = \exp \left(\frac{\mathbf{s}_{i,j}}{\tau}\right)$

It then turns the matrix into a doubly-stochastic matrix by iterative column- and row-wise normalization:

$\begin{split}\mathbf{S} &= \mathbf{S} \oslash (\mathbf{1}_{n_2} \mathbf{1}_{n_2}^\top \cdot \mathbf{S}) \\ \mathbf{S} &= \mathbf{S} \oslash (\mathbf{S} \cdot \mathbf{1}_{n_2} \mathbf{1}_{n_2}^\top)\end{split}$

where $$\oslash$$ denotes element-wise division and $$\mathbf{1}_n$$ denotes a column vector of length $$n$$ whose elements are all $$1$$s.
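The two steps above can be sketched in plain NumPy for a single square matrix (the actual module operates on batched PyTorch tensors; the function name here is illustrative):

```python
import numpy as np

def sinkhorn(s: np.ndarray, tau: float = 1.0, max_iter: int = 10,
             epsilon: float = 1e-4) -> np.ndarray:
    # Step 1: element-wise exp with temperature tau
    S = np.exp(s / tau)
    # Step 2: alternate column- and row-wise normalization,
    # matching the order of the two equations above
    for _ in range(max_iter):
        S = S / (S.sum(axis=0, keepdims=True) + epsilon)  # normalize columns
        S = S / (S.sum(axis=1, keepdims=True) + epsilon)  # normalize rows
    return S

rng = np.random.default_rng(0)
S = sinkhorn(rng.random((5, 5)), tau=0.5, max_iter=100)
# rows and columns both sum to approximately 1
```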

Parameters
• max_iter – maximum iterations (default: 10)

• tau – the hyperparameter $$\tau$$ controlling the temperature (default: 1)

• epsilon – a small number for numerical stability (default: 1e-4)

• log_forward – apply log-scale computation for better numerical stability (default: True)

• batched_operation – apply batched operations for better efficiency (but may cause issues in back-propagation; default: False)

Note

tau is an important hyperparameter for the Sinkhorn algorithm. It controls the distance between the predicted doubly-stochastic matrix and the discrete permutation matrix computed by the Hungarian algorithm (see hungarian()). Given a small tau, Sinkhorn's output is closer to Hungarian's, at the cost of slower convergence and reduced numerical stability.
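This trade-off can be illustrated with a toy NumPy sketch (not the module's API): a small tau drives the output toward a 0/1 permutation matrix, while a large tau flattens it toward a uniform matrix.

```python
import numpy as np

def sinkhorn(s, tau, max_iter=200):
    # plain Sinkhorn sketch: exp with temperature, then alternating normalization
    S = np.exp(s / tau)
    for _ in range(max_iter):
        S = S / S.sum(axis=0, keepdims=True)  # column normalization
        S = S / S.sum(axis=1, keepdims=True)  # row normalization
    return S

rng = np.random.default_rng(0)
s = rng.random((4, 4))
sharp = sinkhorn(s, tau=0.01)  # close to a discrete permutation matrix
soft = sinkhorn(s, tau=10.0)   # close to a uniform 1/n matrix
```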

Note

We recommend setting log_forward=True because it is more numerically stable. It provides more precise gradients in back-propagation and helps the model converge better and faster.

Warning

If you set log_forward=False, this function behaves slightly differently: it does not include the initial exp step.

Note

Setting batched_operation=True may be preferred when you are running inference with this module and do not need gradients.

forward(s: torch.Tensor, nrows: Optional[torch.Tensor] = None, ncols: Optional[torch.Tensor] = None, dummy_row: bool = False) → torch.Tensor[source]
Parameters
• s – $$(b\times n_1 \times n_2)$$ input 3d tensor. $$b$$: batch size

• nrows – $$(b)$$ number of objects in dim1

• ncols – $$(b)$$ number of objects in dim2

• dummy_row – whether to add dummy rows (rows whose elements are all 0) to pad the matrix into a square matrix (default: False)

Returns

$$(b\times n_1 \times n_2)$$ the computed doubly-stochastic matrix

Note

We support batched instances with different numbers of nodes, therefore nrows and ncols are required to specify the exact number of objects along each dimension in the batch. If not specified, we assume the batched matrices are not padded.

Note

The original Sinkhorn algorithm only works for square matrices. To handle cases where the graphs to be matched have different numbers of nodes, it is a common practice to add dummy rows to construct a square matrix. After the row and column normalizations, the padded rows are discarded.
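The dummy-row trick can be sketched as follows. This is a simplified NumPy illustration with a hypothetical small pad value (the actual module handles padding internally, in log space):

```python
import numpy as np

def sinkhorn_with_dummy_rows(s, tau=1.0, max_iter=100):
    # hypothetical helper illustrating dummy-row padding, not the library API
    n1, n2 = s.shape
    assert n1 <= n2  # row number <= column number is assumed
    S = np.exp(s / tau)
    # pad with dummy rows so the matrix becomes square; a tiny positive
    # constant avoids division by zero during normalization
    if n1 < n2:
        S = np.vstack([S, np.full((n2 - n1, n2), 1e-3)])
    for _ in range(max_iter):
        S = S / S.sum(axis=0, keepdims=True)  # column normalization
        S = S / S.sum(axis=1, keepdims=True)  # row normalization
    return S[:n1]  # discard the padded dummy rows

rng = np.random.default_rng(0)
out = sinkhorn_with_dummy_rows(rng.random((3, 5)))
# rows sum to ~1; column sums are <= 1 since dummy rows absorb the rest
```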

Note

We assume row number <= column number. If not, the input matrix will be transposed.

forward_log(s, nrows=None, ncols=None, dummy_row=False)[source]

Compute Sinkhorn with row/column normalization in log space.
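The idea behind the log-space variant can be sketched with a hand-rolled logsumexp (the module's actual implementation differs in details such as batching and padding):

```python
import numpy as np

def logsumexp(a, axis):
    # numerically stable log(sum(exp(a))) along an axis
    m = a.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))

def sinkhorn_log(s, tau=1.0, max_iter=10):
    log_S = s / tau  # the exp step is implicit: we stay in log space
    for _ in range(max_iter):
        # subtracting logsumexp in log space equals dividing by the
        # column/row sums in linear space
        log_S = log_S - logsumexp(log_S, axis=0)  # column normalization
        log_S = log_S - logsumexp(log_S, axis=1)  # row normalization
    return np.exp(log_S)

rng = np.random.default_rng(0)
S = sinkhorn_log(rng.random((5, 5)), tau=0.05, max_iter=200)
# remains stable even at low temperature
```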

forward_ori(s, nrows=None, ncols=None, dummy_row=False)[source]

Compute Sinkhorn with row/column normalization.

Warning

This function is deprecated because forward_log() is more numerically stable.

training: bool