Representation learning

I'm trying to reproduce the results of a paper. What I need to do is representation learning for multi-view prediction on a dataset with two views. In case my translation is inaccurate, the original English is: "Our final experiments use speech data from XRMB, consisting of simultaneously recorded acoustic and articulatory measurements. Our task on this data set was representation learning for multi-view prediction – that is, using both views of data to learn a shared discriminative representation."

The paper is here: https://arxiv.org/pdf/1907.07739v1.pdf

Specifically, what I need to reproduce is Section 4.3 of the paper.

The GitHub repo is here: https://github.com/hdcouture/TOCCA
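To show where I'm starting from: my understanding is that TOCCA builds on classical CCA, which finds projections of the two views that are maximally correlated, so the projected features can serve as a shared representation. Below is a rough sketch I put together of just that linear CCA baseline. This is plain NumPy with toy random data standing in for the two XRMB views; it is my own illustration, not the paper's TOCCA code, and the `reg` regularizer and dimensions are values I picked arbitrarily.

```python
import numpy as np

def linear_cca(X, Y, k, reg=1e-4):
    """Classical linear CCA: find projections Wx, Wy of the two views
    whose projected features are maximally correlated.
    X: (n, d1) view 1; Y: (n, d2) view 2; k: shared dimension."""
    n = X.shape[0]
    # Center each view
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # Regularized covariance matrices
    Sxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):
        # Inverse matrix square root via eigendecomposition
        vals, vecs = np.linalg.eigh(S)
        return vecs @ np.diag(vals ** -0.5) @ vecs.T

    Sxx_i, Syy_i = inv_sqrt(Sxx), inv_sqrt(Syy)
    # SVD of the whitened cross-covariance gives the canonical directions
    U, s, Vt = np.linalg.svd(Sxx_i @ Sxy @ Syy_i)
    Wx = Sxx_i @ U[:, :k]
    Wy = Syy_i @ Vt[:k].T
    return Wx, Wy, s[:k]  # s[:k] are the canonical correlations

# Toy usage: random data with a shared latent signal standing in for
# the acoustic and articulatory views (the real XRMB data, of course,
# comes from the dataset itself).
rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 10))          # shared latent signal
X = Z @ rng.standard_normal((10, 40)) + 0.1 * rng.standard_normal((500, 40))
Y = Z @ rng.standard_normal((10, 30)) + 0.1 * rng.standard_normal((500, 30))
Wx, Wy, corrs = linear_cca(X, Y, k=5)
print("top canonical correlations:", np.round(corrs, 3))
```

As far as I can tell from the abstract, TOCCA replaces these linear maps with deep networks and adds a task (classification) loss on top of the correlation objective, but I'm not sure which script in the repo corresponds to the Section 4.3 experiment.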

I'm hoping someone can walk me through it. Thanks!

Mainly it's just very time-consuming; figuring it out on my own would take a long time.