Two-stage pretraining for chemicals:

1. Masked language model
2. Predict chemical properties

@omendezlucio, Nicolaou, @bertonearnshaw

https://arxiv.org/abs/2211.0265
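A minimal sketch of what such a two-stage setup could look like, assuming a generic transformer encoder with placeholder sizes (hypothetical PyTorch code, not the paper's actual DeBERTa-on-graphs architecture):

```python
import torch.nn as nn

# Stage 1: masked-token pretraining on unlabeled molecules.
# Vocabulary size, width, and depth are placeholders, not MolE's real settings.
class MaskedPretrainer(nn.Module):
    def __init__(self, vocab_size=128, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True),
            num_layers=4,
        )
        self.mlm_head = nn.Linear(dim, vocab_size)  # reconstruct the masked tokens

    def forward(self, tokens):                      # tokens: (batch, seq) integer ids
        return self.mlm_head(self.encoder(self.embed(tokens)))

# Stage 2: keep the pretrained encoder, swap in a multi-task property head.
class PropertyPredictor(nn.Module):
    def __init__(self, pretrained: MaskedPretrainer, dim=256, n_tasks=32):
        super().__init__()
        self.embed, self.encoder = pretrained.embed, pretrained.encoder
        self.task_head = nn.Linear(dim, n_tasks)    # one output per property (count is a placeholder)

    def forward(self, tokens):
        pooled = self.encoder(self.embed(tokens)).mean(dim=1)  # average over tokens
        return self.task_head(pooled)
```

In the paper the second stage is a massive multi-task step over biological data; the single linear head above is only a stand-in for that idea.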

Interpolated polynomial multiple zeta values of fixed weight, depth, and height

We define the interpolated polynomial multiple zeta values as a common generalization of multiple zeta values, multiple zeta-star values, interpolated multiple zeta values, symmetric multiple zeta values, and polynomial multiple zeta values. We then compute the generating function of the sum of interpolated polynomial multiple zeta values of fixed weight, depth, and height.
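For readers outside the area, the objects being generalized here are the classical multiple zeta values; the standard background definitions (textbook material, not taken from this abstract) are:

```latex
% Multiple zeta values and zeta-star values for an admissible index (k_1 \ge 2):
\zeta(k_1,\dots,k_r) = \sum_{n_1 > n_2 > \cdots > n_r \ge 1}
  \frac{1}{n_1^{k_1} n_2^{k_2} \cdots n_r^{k_r}},
\qquad
\zeta^{\star}(k_1,\dots,k_r) = \sum_{n_1 \ge n_2 \ge \cdots \ge n_r \ge 1}
  \frac{1}{n_1^{k_1} n_2^{k_2} \cdots n_r^{k_r}}.

% Weight, depth, and height of the index (k_1,\dots,k_r):
\mathrm{wt} = k_1 + \cdots + k_r, \qquad \mathrm{dep} = r, \qquad
\mathrm{ht} = \#\{\, i : k_i \ge 2 \,\}.
```

The interpolated values are polynomials in a parameter t that recover the ordinary values at t = 0 and the star values at t = 1; the abstract's generating function runs over all indices sharing a fixed weight, depth, and height.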

Thank you!
---
RT @annakcroft
@KevinKaichuang @omendezlucio @bertonearnshaw Missing a 7!: https://arxiv.org/abs/2211.02657
https://twitter.com/annakcroft/status/1592529791378690049
MolE: a molecular foundation model for drug discovery

Models that accurately predict properties based on chemical structure are valuable tools in drug discovery. However, for many properties, public and private training sets are typically small, and it is difficult for the models to generalize well outside of the training data. Recently, large language models have addressed this problem by using self-supervised pretraining on large unlabeled datasets, followed by fine-tuning on smaller, labeled datasets. In this paper, we report MolE, a molecular foundation model that adapts the DeBERTa architecture to be used on molecular graphs together with a two-step pretraining strategy. The first step of pretraining is a self-supervised approach focused on learning chemical structures, and the second step is a massive multi-task approach to learn biological information. We show that fine-tuning pretrained MolE achieves state-of-the-art results on 9 of the 22 ADMET tasks included in the Therapeutic Data Commons.
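For reference, the 22 ADMET tasks mentioned above are distributed through the Therapeutic Data Commons Python package (PyTDC). A hedged sketch of loading one of them, with API names recalled from the TDC documentation (verify against the installed release):

```python
from tdc.benchmark_group import admet_group

# Download/cache the ADMET benchmark group and pull one of its 22 tasks.
group = admet_group(path="data/")
benchmark = group.get("Caco2_Wang")      # Caco-2 permeability, one example task
train_val, test = benchmark["train_val"], benchmark["test"]

# Each split is a DataFrame with a SMILES column ("Drug") and a label column ("Y");
# a fine-tuned model would train on train_val and be scored on test.
print(len(train_val), len(test))
```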

@KevinKaichuang Interesting paper. Looks like your first arXiv link goes to a different paper because it is missing the last digit 7.