Models

ThinkMatch currently contains PyTorch implementations of the following deep graph matching methods:

Note

For up-to-date benchmarks, please refer to Papers With Code.

GMN

Our implementation of the following paper:

  • Andrei Zanfir and Cristian Sminchisescu. “Deep Learning of Graph Matching.” CVPR 2018.

    [paper]

GMN proposes the first deep graph matching pipeline that is end-to-end trainable via supervised learning and gradient descent. It combines the following components into a graph matching pipeline:

  • VGG16 CNN to extract image features

  • Delaunay triangulation to build graphs

  • Building the affinity matrix efficiently via Factorized Graph Matching (FGM)

  • Solving the resulting Quadratic Assignment Problem (QAP) by Spectral Matching (SM) and the Sinkhorn algorithm, both of which are differentiable

  • Supervised learning based on pixel offset loss (known as “Robust loss” in the paper)
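The two differentiable solver steps above (SM followed by Sinkhorn) can be sketched as follows. This is a minimal single-sample illustration of the technique, not the ThinkMatch implementation:

```python
import torch

def spectral_matching(K, n1, n2, iters=50):
    """Power iteration: the leading eigenvector of the (n1*n2 x n1*n2)
    affinity matrix K serves as a relaxed matching score."""
    v = torch.ones(K.shape[0], 1)
    for _ in range(iters):
        v = K @ v
        v = v / v.norm()
    return v.view(n1, n2)

def sinkhorn(s, iters=10, eps=1e-8):
    """Alternating row/column normalization towards a doubly-stochastic matrix."""
    s = s.clamp(min=eps)
    for _ in range(iters):
        s = s / s.sum(dim=1, keepdim=True)  # rows sum to 1
        s = s / s.sum(dim=0, keepdim=True)  # columns sum to 1
    return s
```

Both steps are plain tensor operations, so gradients can flow through them during training.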

Benchmark Results

PascalVOC - 2GM

experiment config: experiments/vgg16_gmn_voc.yaml

pretrained model: google drive

| model | year | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GMN | 2018 | 0.4163 | 0.5964 | 0.6027 | 0.4795 | 0.7918 | 0.7020 | 0.6735 | 0.6488 | 0.3924 | 0.6128 | 0.6693 | 0.5976 | 0.6106 | 0.5975 | 0.3721 | 0.7818 | 0.6800 | 0.4993 | 0.8421 | 0.9141 | 0.6240 |

Willow Object Class - 2GM

experiment config: experiments/vgg16_gmn_willow.yaml

pretrained model: google drive

| model | year | remark | Car | Duck | Face | Motorbike | Winebottle | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GMN | 2018 |  | 0.6790 | 0.7670 | 0.9980 | 0.6920 | 0.8310 | 0.7934 |

SPair-71k - 2GM

experiment config: experiments/vgg16_gmn_spair71k.yaml

pretrained model: google drive

| model | year | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | dog | horse | mtbike | person | plant | sheep | train | tv | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GMN | 2018 | 0.3109 | 0.3082 | 0.3772 | 0.2441 | 0.3846 | 0.3538 | 0.3411 | 0.2842 | 0.4074 | 0.2282 | 0.2635 | 0.3089 | 0.3757 | 0.2723 | 0.3358 | 0.3470 | 0.2628 | 0.2930 | 0.3166 |

File Organization

├── affinity_layer.py
|   the implementation of affinity layer to compute the affinity matrix for GMN
├── displacement_layer.py
|   the implementation of the displacement layer to compute the pixel offset loss
├── model.py
|   the implementation of training/evaluation procedures of GMN
├── model_config.py
|   the declaration of model hyperparameters
└── voting_layer.py
    the implementation of the voting layer to compute the row-stochastic matrix with softmax

Credits and Citation

Please cite the following paper if you use this model in your research:

@inproceedings{ZanfirCVPR18,
  author = {A. Zanfir and C. Sminchisescu},
  title = {Deep Learning of Graph Matching},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition},
  pages={2684--2693},
  year={2018}
}

PCA-GM

Our implementation of the following papers:

  • Runzhong Wang, Junchi Yan and Xiaokang Yang. “Combinatorial Learning of Robust Deep Graph Matching: an Embedding based Approach.” TPAMI 2020.

    [paper], [project page]

  • Runzhong Wang, Junchi Yan and Xiaokang Yang. “Learning Combinatorial Embedding Networks for Deep Graph Matching.” ICCV 2019. [paper]

PCA-GM proposes the first deep graph matching network based on graph embedding, and it is composed of the following components:

  • VGG16 CNN to extract image features

  • Delaunay triangulation to build graphs

  • Graph Convolutional Network (GCN) with cross-graph convolution to embed graph structure features

  • Solving the resulting Linear Assignment Problem by Sinkhorn network

  • Supervised learning based on cross entropy loss (known as “permutation loss” in this paper)
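The permutation loss in the last step is a cross entropy between the predicted doubly-stochastic matrix and the 0/1 ground-truth permutation. A minimal sketch (the normalization convention is ours, not necessarily the exact ThinkMatch code):

```python
import torch

def permutation_loss(pred, gt_perm, eps=1e-8):
    """Element-wise cross entropy between a doubly-stochastic prediction
    and a ground-truth permutation matrix."""
    pred = pred.clamp(eps, 1 - eps)
    ce = -(gt_perm * pred.log() + (1 - gt_perm) * (1 - pred).log())
    return ce.sum() / gt_perm.sum()  # normalize by number of matched nodes
```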

Such a CNN-GNN-Sinkhorn-CrossEntropy framework has been adopted by many follow-up papers.

PCA-GM was proposed in the conference version. In the journal version, we propose IPCA-GM, in which the cross-graph convolution step is applied iteratively. The motivation of the iterative update scheme is that better embedding features lead to a better cross-graph weight matrix, and vice versa.
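The cross-graph convolution step can be sketched as follows; the similarity-softmax weighting below is an illustrative choice, while the real model learns these weights end-to-end:

```python
import torch

def cross_graph_conv(x1, x2):
    """Mix each node's feature with a soft-assigned counterpart from the other
    graph. Better features yield a better cross-graph weight matrix, which in
    turn yields better features -- the loop that IPCA-GM iterates."""
    w = torch.softmax(x1 @ x2.t(), dim=1)  # soft correspondence graph1 -> graph2
    y1 = (x1 + w @ x2) / 2                 # graph-1 features, cross-graph mixed
    y2 = (x2 + w.t() @ x1) / 2             # graph-2 features (transposed weights)
    return y1, y2
```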

Benchmark Results

PascalVOC - 2GM

  • PCA-GM

    experiment config: experiments/vgg16_pca_voc.yaml

    pretrained model: google drive

  • IPCA-GM

    experiment config: experiments/vgg16_ipca_voc.yaml

    pretrained model: google drive

| model | year | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PCA-GM | 2019 | 0.4979 | 0.6193 | 0.6531 | 0.5715 | 0.7882 | 0.7556 | 0.6466 | 0.6969 | 0.4164 | 0.6339 | 0.5073 | 0.6705 | 0.6671 | 0.6164 | 0.4447 | 0.8116 | 0.6782 | 0.5922 | 0.7845 | 0.9042 | 0.6478 |
| IPCA-GM | 2020 | 0.5378 | 0.6622 | 0.6714 | 0.6120 | 0.8039 | 0.7527 | 0.7255 | 0.7252 | 0.4455 | 0.6524 | 0.5430 | 0.6724 | 0.6790 | 0.6421 | 0.4793 | 0.8435 | 0.7079 | 0.6398 | 0.8380 | 0.9083 | 0.6770 |

Willow Object Class - 2GM

  • PCA-GM

    experiment config: experiments/vgg16_pca_willow.yaml

    pretrained model: google drive

  • IPCA-GM

    experiment config: experiments/vgg16_ipca_willow.yaml

    pretrained model: google drive

| model | year | remark | Car | Duck | Face | Motorbike | Winebottle | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PCA-GM | 2019 |  | 0.8760 | 0.8360 | 1.0000 | 0.7760 | 0.8840 | 0.8744 |
| IPCA-GM | 2020 |  | 0.9040 | 0.8860 | 1.0000 | 0.8300 | 0.8830 | 0.9006 |

SPair-71k - 2GM

  • PCA-GM

    experiment config: experiments/vgg16_pca_spair71k.yaml

    pretrained model: google drive

  • IPCA-GM

    experiment config: experiments/vgg16_ipca_spair71k.yaml

    pretrained model: google drive

| model | year | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | dog | horse | mtbike | person | plant | sheep | train | tv | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PCA-GM | 2019 | 0.3001 | 0.3798 | 0.4518 | 0.2638 | 0.3731 | 0.4448 | 0.3659 | 0.3048 | 0.3786 | 0.3098 | 0.3010 | 0.4072 | 0.4574 | 0.3231 | 0.3572 | 0.3488 | 0.2132 | 0.5822 | 0.3646 |
| IPCA-GM | 2020 | 0.3597 | 0.3401 | 0.4184 | 0.2810 | 0.3821 | 0.4351 | 0.3649 | 0.3211 | 0.3762 | 0.2952 | 0.3425 | 0.4189 | 0.4639 | 0.3472 | 0.3750 | 0.3716 | 0.2571 | 0.5788 | 0.3738 |

File Organization

├── affinity_layer.py
|   the implementation of affinity layer to compute the affinity matrix for PCA-GM and IPCA-GM
├── model.py
|   the implementation of training/evaluation procedures of PCA-GM and IPCA-GM
└── model_config.py
    the declaration of model hyperparameters

Credits and Citation

Please cite the following paper if you use this model in your research:

@inproceedings{WangICCV19,
  author = {Wang, Runzhong and Yan, Junchi and Yang, Xiaokang},
  title = {Learning Combinatorial Embedding Networks for Deep Graph Matching},
  booktitle = {IEEE International Conference on Computer Vision},
  pages={3056--3065},
  year = {2019}
}

CIE-H

Our implementation of the following paper:

  • Tianshu Yu, Runzhong Wang, Junchi Yan, Baoxin Li. “Learning deep graph matching with channel-independent embedding and Hungarian attention.” ICLR 2020.

    [paper]

CIE-H follows the CNN-GNN-metric-Sinkhorn pipeline of PCA-GM and improves it in two aspects: 1) a channel-independent edge embedding module for better graph feature extraction; 2) a Hungarian attention module that dynamically constructs a structured and sparsely connected layer, taking the most contributing matching pairs as hard attention during training.
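The Hungarian attention idea can be sketched as follows: discretize the current prediction with the Hungarian algorithm, then take the union with the ground truth as a hard attention mask over the loss. This is a simplified sketch using SciPy's assignment solver, not the exact ThinkMatch code:

```python
import torch
from scipy.optimize import linear_sum_assignment

def hungarian_attention_mask(pred, gt_perm):
    """Union of the discretized (Hungarian) prediction and the ground-truth
    permutation: only these 'most contributing' entries receive loss/gradient."""
    row, col = linear_sum_assignment(-pred.detach().numpy())  # maximize total score
    hung = torch.zeros_like(pred)
    hung[row, col] = 1.0
    return torch.clamp(hung + gt_perm, max=1.0)
```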

Benchmark Results

PascalVOC - 2GM

experiment config: experiments/vgg16_cie_voc.yaml

pretrained model: google drive

| model | year | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CIE-H | 2020 | 0.5250 | 0.6858 | 0.7015 | 0.5706 | 0.8207 | 0.7700 | 0.7073 | 0.7313 | 0.4383 | 0.6994 | 0.6237 | 0.7018 | 0.7031 | 0.6641 | 0.4763 | 0.8525 | 0.7172 | 0.6400 | 0.8385 | 0.9168 | 0.6892 |

Willow Object Class - 2GM

experiment config: experiments/vgg16_cie_willow.yaml

pretrained model: google drive

| model | year | remark | Car | Duck | Face | Motorbike | Winebottle | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CIE-H | 2020 |  | 0.8581 | 0.8206 | 0.9994 | 0.8836 | 0.8871 | 0.8898 |

SPair-71k - 2GM

experiment config: experiments/vgg16_cie_spair71k.yaml

pretrained model: google drive

| model | year | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | dog | horse | mtbike | person | plant | sheep | train | tv | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CIE-H | 2020 | 0.4284 | 0.4016 | 0.5820 | 0.3753 | 0.4514 | 0.4997 | 0.4395 | 0.3980 | 0.4435 | 0.3588 | 0.4301 | 0.5219 | 0.4714 | 0.3930 | 0.4167 | 0.4387 | 0.3806 | 0.7431 | 0.4541 |

File Organization

├── model.py
|   the implementation of training/evaluation procedures of CIE-H
└── model_config.py
    the declaration of model hyperparameters

Some files are borrowed from models/PCA.

Credits and Citation

Please cite the following paper if you use this model in your research:

@inproceedings{YuICLR20,
  title={Learning deep graph matching with channel-independent embedding and Hungarian attention},
  author={Yu, Tianshu and Wang, Runzhong and Yan, Junchi and Li, Baoxin},
  booktitle={International Conference on Learning Representations},
  year={2020}
}

NGM

Our implementation of the following paper:

  • Runzhong Wang, Junchi Yan, Xiaokang Yang. “Neural Graph Matching Network: Learning Lawler’s Quadratic Assignment Problem with Extension to Hypergraph and Multiple-graph Matching.” TPAMI 2021. [paper], [project page]

NGM proposes a learnable solver for Lawler's Quadratic Assignment Problem (Lawler's QAP), which is the most general form of graph matching. Learning Lawler's QAP is achieved by first transforming it into the so-called association graph, so that solving the QAP becomes equivalent to a vertex classification problem on the association graph, which is then solved with Graph Neural Networks as a standard technique.
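The transformation to the association graph can be sketched as follows: vertex affinities (the diagonal of K) become node features, edge affinities (the off-diagonal of K) become a weighted adjacency, and a GNN scores each association-graph vertex. The layer below is a generic message-passing step for illustration, not NGM's exact architecture:

```python
import torch

def association_graph(K):
    """Split the Lawler's QAP affinity matrix K into association-graph
    adjacency (off-diagonal) and initial vertex features (diagonal)."""
    v = torch.diagonal(K).clone().unsqueeze(-1)
    A = K.clone()
    A.fill_diagonal_(0.0)
    return A, v

def gnn_layer(A, x, W):
    """One affinity-weighted message-passing step with degree normalization."""
    deg = A.sum(dim=1, keepdim=True).clamp(min=1e-8)
    return torch.relu((A @ x) / deg @ W)
```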

NGM has the following components:

  • (If images are the input) VGG16 CNN to extract image features

  • (If images are the input) Delaunay triangulation to build graphs

  • (If images are the input) Building the affinity matrix efficiently via Factorized Graph Matching (FGM)

  • Solving the resulting Quadratic Assignment Problem (QAP) by the proposed Graph Neural Network solver

  • Supervised learning based on cross-entropy (also known as “permutation loss”)

We also show that such a general scheme can be extended to hyper-graph matching (by building a so-called association hypergraph) and multi-graph matching (by adding a spectral multi-matching head).

The first version of NGM/NHGM/NMGM was developed in 2019 with a VGG16 CNN, in line with GMN. We further developed the improved versions NGM-v2/NHGM-v2/NMGM-v2 in 2021 by simply replacing VGG16 with the improved backbone developed by BBGM.

To summarize, here we include the following models:

  • the original models developed in 2019:

    • NGM: two-graph matching/Lawler’s QAP solver with VGG16

    • NHGM: hyper-graph matching solver with VGG16

    • NMGM: multi-graph matching solver with VGG16

  • the improved v2 models developed in 2021:

    • NGM-v2: two-graph matching/Lawler’s QAP solver with VGG16+SplineConv

    • NHGM-v2: hyper-graph matching solver with VGG16+SplineConv

    • NMGM-v2: multi-graph matching solver with VGG16+SplineConv

Benchmark Results

PascalVOC - 2GM

  • NGM

    experiment config: experiments/vgg16_ngm_voc.yaml

    pretrained model: google drive

  • NHGM

    experiment config: experiments/vgg16_nhgm_voc.yaml

    pretrained model: google drive

  • NGM-v2

    experiment config: experiments/vgg16_ngmv2_voc.yaml

    pretrained model: google drive

  • NHGM-v2

    experiment config: experiments/vgg16_nhgmv2_voc.yaml

    pretrained model: google drive

Since NMGM does not support partial matching, we do not evaluate NMGM on Pascal VOC.

| model | year | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NGM | 2019 | 0.5010 | 0.6350 | 0.5790 | 0.5340 | 0.7980 | 0.7710 | 0.7360 | 0.6820 | 0.4110 | 0.6640 | 0.4080 | 0.6030 | 0.6190 | 0.6350 | 0.4560 | 0.7710 | 0.6930 | 0.6550 | 0.7920 | 0.8820 | 0.6413 |
| NHGM | 2019 | 0.5240 | 0.6220 | 0.5830 | 0.5570 | 0.7870 | 0.7770 | 0.7440 | 0.7070 | 0.4200 | 0.6460 | 0.5380 | 0.6100 | 0.6190 | 0.6080 | 0.4680 | 0.7910 | 0.6680 | 0.5510 | 0.8090 | 0.8870 | 0.6458 |
| NGM-v2 | 2021 | 0.6184 | 0.7118 | 0.7762 | 0.7875 | 0.8733 | 0.9363 | 0.8770 | 0.7977 | 0.5535 | 0.7781 | 0.8952 | 0.7880 | 0.8011 | 0.7923 | 0.6258 | 0.9771 | 0.7769 | 0.7574 | 0.9665 | 0.9323 | 0.8011 |
| NHGM-v2 | 2021 | 0.5995 | 0.7154 | 0.7724 | 0.7902 | 0.8773 | 0.9457 | 0.8903 | 0.8181 | 0.5995 | 0.8129 | 0.8695 | 0.7811 | 0.7645 | 0.7750 | 0.6440 | 0.9872 | 0.7778 | 0.7538 | 0.9787 | 0.9280 | 0.8040 |

Willow Object Class - 2GM & MGM

  • NGM

    experiment config: experiments/vgg16_ngm_willow.yaml

    pretrained model: google drive

  • NHGM

    experiment config: experiments/vgg16_nhgm_willow.yaml

    pretrained model: google drive

  • NMGM

    experiment config: experiments/vgg16_nmgm_willow.yaml

    pretrained model: google drive

  • NGM-v2

    experiment config: experiments/vgg16_ngmv2_willow.yaml

    pretrained model: google drive

  • NHGM-v2

    experiment config: experiments/vgg16_nhgmv2_willow.yaml

    pretrained model: google drive

  • NMGM-v2

    experiment config: experiments/vgg16_nmgmv2_willow.yaml

    pretrained model: google drive

| model | year | remark | Car | Duck | Face | Motorbike | Winebottle | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NGM | 2019 |  | 0.8420 | 0.7760 | 0.9940 | 0.7680 | 0.8830 | 0.8530 |
| NHGM | 2019 |  | 0.8650 | 0.7220 | 0.9990 | 0.7930 | 0.8940 | 0.8550 |
| NMGM | 2019 |  | 0.7850 | 0.9210 | 1.0000 | 0.7870 | 0.9480 | 0.8880 |
| NGM-v2 | 2021 |  | 0.9740 | 0.9340 | 1.0000 | 0.9860 | 0.9830 | 0.9754 |
| NHGM-v2 | 2021 |  | 0.9740 | 0.9390 | 1.0000 | 0.9860 | 0.9890 | 0.9780 |
| NMGM-v2 | 2021 |  | 0.9760 | 0.9447 | 1.0000 | 1.0000 | 0.9902 | 0.9822 |

SPair-71k - 2GM

  • NGM

    experiment config: experiments/vgg16_ngm_spair71k.yaml

    pretrained model: google drive

  • NHGM

    experiment config: experiments/vgg16_nhgm_spair71k.yaml

    pretrained model: google drive

  • NGM-v2

    experiment config: experiments/vgg16_ngmv2_spair71k.yaml

    pretrained model: google drive

  • NHGM-v2

    experiment config: experiments/vgg16_nhgmv2_spair71k.yaml

    pretrained model: google drive

| model | year | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | dog | horse | mtbike | person | plant | sheep | train | tv | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| NGM | 2019 | 0.3569 | 0.3477 | 0.4386 | 0.2964 | 0.4603 | 0.4895 | 0.3792 | 0.2697 | 0.4639 | 0.3398 | 0.2811 | 0.3835 | 0.4402 | 0.3584 | 0.4725 | 0.3564 | 0.3311 | 0.5892 | 0.3919 |
| NHGM | 2019 | 0.2841 | 0.3115 | 0.4310 | 0.2827 | 0.3578 | 0.4558 | 0.3659 | 0.3024 | 0.5117 | 0.3334 | 0.3084 | 0.3836 | 0.4231 | 0.3440 | 0.4594 | 0.4020 | 0.3203 | 0.4251 | 0.3723 |
| NGM-v2 | 2021 | 0.5717 | 0.5095 | 0.8536 | 0.7214 | 0.7030 | 0.9167 | 0.7754 | 0.7263 | 0.6432 | 0.7232 | 0.6101 | 0.6743 | 0.6807 | 0.6554 | 0.9891 | 0.7820 | 0.9439 | 0.9728 | 0.7473 |
| NHGM-v2 | 2021 | 0.5354 | 0.4780 | 0.7795 | 0.6473 | 0.6715 | 0.8428 | 0.6763 | 0.6606 | 0.6363 | 0.6939 | 0.5624 | 0.6460 | 0.6939 | 0.6332 | 0.9726 | 0.7462 | 0.8741 | 0.9836 | 0.7074 |

Run QAPLIB Experiments

Our proposed NGM is capable of handling pure combinatorial optimization problems from QAPLIB. We train one model on the problems sharing the same prefix (i.e. the same category of problems), and we set the objective function as the loss, resulting in a self-supervised learning scheme. To further improve the performance on QAPLIB, we adopt random sampling based on Gumbel-Sinkhorn.
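Gumbel-Sinkhorn sampling can be sketched as follows: perturb the score matrix with i.i.d. Gumbel noise, Sinkhorn-normalize each sample, and keep the sample with the best QAP objective. The sketch below covers the sampling step only and is not the exact ThinkMatch code:

```python
import torch

def gumbel_sinkhorn_samples(s, tau=1.0, n_samples=8, iters=10):
    """Draw n_samples noisy, near-doubly-stochastic matrices around score matrix s."""
    # standard Gumbel noise: -log(-log(U)), U ~ Uniform(0, 1)
    g = -torch.log(-torch.log(torch.rand(n_samples, *s.shape).clamp_min(1e-20)))
    out = torch.exp((s + g) / tau)
    for _ in range(iters):
        out = out / out.sum(dim=2, keepdim=True)  # normalize rows of each sample
        out = out / out.sum(dim=1, keepdim=True)  # normalize columns of each sample
    return out
```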

About the Performance

The quality of solutions found by our method is comparable to, and sometimes better than, that of the classic learning-free optimization method Sinkhorn-JA, while the inference time is significantly reduced. For more details, please visit the project page.

Only the NGM Model is Considered for QAPLIB

Since NGM-v2 only improves the image feature extractor, which is irrelevant to QAPLIB problems, we only test NGM on QAPLIB. Besides, QAPLIB instances are equivalent to two-graph matching problems, so the hyper-graph and multi-graph variants are not considered for QAPLIB.

How to Run

For experiments on QAPLIB, run

python train_eval_qap.py --config experiments/ngm_qaplib.yaml

and the QAPLIB dataset will be downloaded automatically. By default, training is run on the first category (bur) and testing is run on all instances. You may modify the TRAIN.CLASS parameter for different problems.

NOTICE: you may need a large GPU memory (up to 48GB in our experiments) to run NGM on QAPLIB instances with sizes up to 150. Adjust the parameters MAX_TRAIN_SIZE and MAX_TEST_SIZE accordingly to fit your GPU memory.

File Organization

├── geo_edge_feature.py
|   the implementation of used geometric features
├── gnn.py
|   the implementation of used graph neural networks
├── hypermodel.py
|   the implementation of training/evaluation procedures of NHGM
├── hypermodel_v2.py
|   the implementation of training/evaluation procedures of NHGM-v2
├── mgmmodel.py
|   the implementation of training/evaluation procedures of NMGM
├── model.py
|   the implementation of training/evaluation procedures of NGM (can handle both graph matching and QAP)
├── model_config.py
|   the declaration of model hyperparameters
└── model_v2.py
    the implementation of training/evaluation procedures of NGM-v2 (can only handle graph matching)

Some layers are borrowed from models/BBGM.

Credits and Citation

Please cite the following paper if you use this model in your research:

@article{WangPAMI21,
  author = {Wang, Runzhong and Yan, Junchi and Yang, Xiaokang},
  title = {Neural Graph Matching Network: Learning Lawler's Quadratic Assignment Problem with Extension to Hypergraph and Multiple-graph Matching},
  journal = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2021}
}

GANN

Our implementation of the following paper:

  • Runzhong Wang, Junchi Yan and Xiaokang Yang. “Graduated Assignment for Joint Multi-Graph Matching and Clustering with Application to Unsupervised Graph Matching Network Learning.” NeurIPS 2020. [paper]

  • Runzhong Wang, Shaofei Jiang, Junchi Yan and Xiaokang Yang. “Robust Self-supervised Learning of Deep Graph Matching with Mixture of Modes.” Submitted to TPAMI. [project page]

GANN proposes a self-supervised learning framework that leverages a graph matching solver to provide pseudo labels for training the neural network module in the deep graph matching pipeline. We propose a general graph matching solver for various graph matching settings based on the classic Graduated Assignment (GA) algorithm.
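The classic Graduated Assignment iteration at the core of this solver can be sketched for two-graph matching as follows (a minimal version; the actual solver generalizes to multi-graph and clustering settings):

```python
import torch

def graduated_assignment(K, n1, n2, betas=(1.0, 2.0, 5.0, 10.0), iters=20):
    """Alternate a first-order update on the QAP objective x^T K x with
    Sinkhorn normalization, annealing the temperature beta (soft -> hard)."""
    x = torch.full((n1 * n2, 1), 1.0 / n2)
    for beta in betas:
        for _ in range(iters):
            v = beta * (K @ x).view(n1, n2)
            s = torch.exp(v - v.max())            # numerically stable softassign
            for _ in range(10):                   # Sinkhorn normalization
                s = s / s.sum(dim=1, keepdim=True)
                s = s / s.sum(dim=0, keepdim=True)
            x = s.reshape(-1, 1)
    return x.view(n1, n2)
```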

The variants on three different graph matching settings are denoted by different suffixes:

  • GANN-2GM: self-supervised graduated assignment neural network for two-graph matching

  • GANN-MGM: self-supervised graduated assignment neural network for multi-graph matching

  • GANN-MGM3: self-supervised graduated assignment neural network for multi-graph matching with a mixture of modes (this setting is also known as multi-graph matching and clustering in the NeurIPS paper)

GANN-MGM notably surpasses supervised learning methods on the relatively small Willow Object Class dataset.

Benchmark Results

Willow Object Class - MGM

experiment config: experiments/vgg16_gann-mgm_willow.yaml

pretrained model: google drive

| model | year | remark | Car | Duck | Face | Motorbike | Winebottle | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| GANN-MGM | 2020 | self-supervised | 0.9600 | 0.9642 | 1.0000 | 1.0000 | 0.9879 | 0.9906 |

File Organization

├── graduated_assignment.py
|   the implementation of the graduated assignment algorithm covering all scenarios
├── model.py
|   the implementation of training/evaluation procedures of GANN-GM/MGM/MGM3
└── model_config.py
    the declaration of model hyperparameters

Credits and Citation

Please cite the following papers if you use this model in your research:

@inproceedings{WangNeurIPS20,
  author = {Runzhong Wang and Junchi Yan and Xiaokang Yang},
  title = {Graduated Assignment for Joint Multi-Graph Matching and Clustering with Application to Unsupervised Graph Matching Network Learning},
  booktitle = {Neural Information Processing Systems},
  year = {2020}
}

@unpublished{WangPAMIsub21,
  title={Robust Self-supervised Learning of Deep Graph Matching with Mixture of Modes},
  author={Wang, Runzhong and Jiang, Shaofei and Yan, Junchi and Yang, Xiaokang},
  note={submitted to IEEE Transactions of Pattern Analysis and Machine Intelligence},
  year={2021}
}

BBGM

Our implementation of the following paper:

  • Michal Rolínek, Paul Swoboda, Dominik Zietlow, Anselm Paulus, Vít Musil, Georg Martius. “Deep Graph Matching via Blackbox Differentiation of Combinatorial Solvers.” ECCV 2020.

    [paper]

BBGM proposes a new feature extractor that combines the global features of VGG16 with a Spline Convolution module, and this improved backbone is found effective for image matching problems. NOTE: Spline convolution is officially named SplineCNN; however, since the term “CNN” conventionally refers to image feature extractors while SplineCNN operates on graphs, we call it “Spline Convolution” here for disambiguation.

The resulting quadratic assignment problem is solved by a discrete LPMP solver, and the gradient is approximated by the black-box combinatorial solver technique.
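The black-box differentiation technique can be sketched as follows, with SciPy's Hungarian solver standing in for the LPMP solver: the backward pass re-runs the solver on scores perturbed along the incoming gradient and returns the difference of the two discrete solutions as an informative gradient. This is a sketch of the technique from Vlastelica et al., with an assumed hyperparameter `lam`, not the BBGM code:

```python
import torch
from scipy.optimize import linear_sum_assignment

def solve(scores):
    """Discrete stand-in solver: best permutation under the score matrix."""
    row, col = linear_sum_assignment(-scores.detach().numpy())
    perm = torch.zeros_like(scores)
    perm[row, col] = 1.0
    return perm

class BlackboxMatching(torch.autograd.Function):
    lam = 10.0  # interpolation strength (assumed value)

    @staticmethod
    def forward(ctx, scores):
        perm = solve(scores)
        ctx.save_for_backward(scores, perm)
        return perm

    @staticmethod
    def backward(ctx, grad_output):
        scores, perm = ctx.saved_tensors
        # re-solve on scores perturbed along the incoming gradient
        perm_perturbed = solve(scores - BlackboxMatching.lam * grad_output)
        return (perm - perm_perturbed) / BlackboxMatching.lam
```

The solver itself stays a black box: only its outputs on the original and perturbed inputs are needed.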

Benchmark Results

PascalVOC - 2GM

experiment config: experiments/vgg16_bbgm_voc.yaml

pretrained model: google drive

| model | year | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | table | dog | horse | mbike | person | plant | sheep | sofa | train | tv | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BBGM | 2020 | 0.6187 | 0.7106 | 0.7969 | 0.7896 | 0.8740 | 0.9401 | 0.8947 | 0.8022 | 0.5676 | 0.7914 | 0.6458 | 0.7892 | 0.7615 | 0.7512 | 0.6519 | 0.9818 | 0.7729 | 0.7701 | 0.9494 | 0.9393 | 0.7899 |

Willow Object Class - 2GM & MGM

experiment config: experiments/vgg16_bbgm_willow.yaml

pretrained model: google drive

| model | year | remark | Car | Duck | Face | Motorbike | Winebottle | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BBGM | 2020 |  | 0.9680 | 0.8990 | 1.0000 | 0.9980 | 0.9940 | 0.9718 |

SPair-71k - 2GM

experiment config: experiments/vgg16_bbgm_spair71k.yaml

pretrained model: google drive

| model | year | aero | bike | bird | boat | bottle | bus | car | cat | chair | cow | dog | horse | mtbike | person | plant | sheep | train | tv | mean |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BBGM | 2020 | 0.5715 | 0.5405 | 0.8500 | 0.7162 | 0.6942 | 0.9095 | 0.7826 | 0.7223 | 0.6693 | 0.7151 | 0.6131 | 0.6944 | 0.7264 | 0.7221 | 0.9865 | 0.7926 | 0.9358 | 0.9919 | 0.7574 |

File Organization

├── affinity_layer.py
|   the implementation of the inner-product with weight affinity layer proposed by BBGM
├── model.py
|   the implementation of training/evaluation procedures of BBGM
├── model_config.py
|   the declaration of model hyperparameters
└── sconv_archs.py
    the implementation of spline convolution (SplineCNN) operations

Remarks

It is worth noting that our reproduced result of BBGM differs from the result in the original paper. Looking into the code released by BBGM, we find that its authors filter out keypoints that fall outside the bounding box, while we do not. Therefore, graphs in the original BBGM setting have fewer nodes, making the matching problem less challenging than ours. In this repository, we modify BBGM to fit our setting and report our reproduced result for a fair comparison.

Credits and Citation

This code is developed based on the official implementation of BBGM. The code is modified to fit our general framework.

Please cite the following paper if you use this model in your research:

@inproceedings{RolinekECCV20,
  title={Deep Graph Matching via Blackbox Differentiation of Combinatorial Solvers},
  author={Rol{\'\i}nek, Michal and Swoboda, Paul and Zietlow, Dominik and Paulus, Anselm and Musil, V{\'\i}t and Martius, Georg},
  booktitle={European Conference on Computer Vision},
  pages={407--424},
  year={2020},
  organization={Springer}
}