Zhixing Tan, Oct 12, 2022:
Multi-Stage Prompting for Making Pre-trained Language Models Better Translators
How can we leverage and improve prompt tuning techniques to make pre-trained LMs better translators? We answer this question in our ACL 2022 paper.
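For context, the sketch below illustrates vanilla prompt tuning, the baseline that multi-stage prompting builds on: a small set of learnable prompt vectors is prepended to the input embeddings while the LM itself stays frozen. The names and dimensions are illustrative, and the paper's multi-stage scheme is not reproduced here.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Minimal prompt-tuning sketch: only `self.prompt` is trained;
    the pre-trained LM consuming the output stays frozen."""
    def __init__(self, n_tokens=20, d_model=768):
        super().__init__()
        # Learnable prompt vectors, one per virtual token.
        self.prompt = nn.Parameter(torch.randn(n_tokens, d_model) * 0.02)

    def forward(self, input_embeds):  # input_embeds: (batch, seq, d_model)
        batch = input_embeds.size(0)
        p = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # Prepend the virtual tokens before feeding the frozen LM.
        return torch.cat([p, input_embeds], dim=1)
```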
Xiangzhe Kong, Oct 1, 2022:
Molecule Generation by Principal Subgraph Mining and Assembling
We propose the notion of principal subgraph and design a theoretically efficient two-step generation framework for the molecule generation task.
Zonghan Yang, Apr 5, 2022:
On Robust Prefix-Tuning for Text Classification
Prefix-tuning lacks robustness, and current defense methods hamper the modularity of prefixes. We tune an additional prefix during inference to steer the frozen pretrained LM toward correct activations, which significantly improves robustness.
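A minimal sketch of the inference-time idea, under stated assumptions: `forward_fn(prefix)` is a hypothetical wrapper that runs the frozen LM on the current input and returns its hidden activations, and the squared-norm criterion is a placeholder for the paper's actual robustness objective, which is not reproduced here.

```python
import torch

def tune_robust_prefix(forward_fn, trained_prefix, steps=10, lr=1e-3):
    # The trained prefix stays fixed; only an additive correction is
    # optimized per input, so the original prefix remains modular.
    delta = torch.zeros_like(trained_prefix, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        hidden = forward_fn(trained_prefix + delta)
        loss = hidden.pow(2).mean()  # hypothetical robustness criterion
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (trained_prefix + delta).detach()
```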
Fuchao Wei, Mar 20, 2022:
A Class of Short-term Recurrence Anderson Mixing Methods and Their Applications
We develop a novel class of short-term recurrence Anderson mixing methods and validate their effectiveness in several applications, including solving fixed-point problems and training neural networks.
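For readers unfamiliar with the baseline this class extends, below is a minimal sketch of classic Anderson mixing for a fixed-point problem x = g(x). All names are illustrative, and the paper's short-term recurrence variant is not reproduced here.

```python
import numpy as np

def anderson_fixed_point(g, x0, m=5, max_iter=100, tol=1e-8):
    """Classic (type-II) Anderson mixing for x = g(x)."""
    x = np.asarray(x0, dtype=float).copy()
    X, F = [], []  # histories of iterates and residuals
    for _ in range(max_iter):
        fx = g(x) - x  # residual f(x) = g(x) - x
        if np.linalg.norm(fx) < tol:
            break
        X.append(x.copy())
        F.append(fx.copy())
        X, F = X[-m:], F[-m:]  # keep a window of the last m pairs
        if len(F) == 1:
            x = x + fx  # plain fixed-point step
        else:
            # Mix the history: solve min_gamma ||fx - dF @ gamma||,
            # then update the iterate with the same coefficients.
            dF = np.stack([F[i + 1] - F[i] for i in range(len(F) - 1)], axis=1)
            dX = np.stack([X[i + 1] - X[i] for i in range(len(X) - 1)], axis=1)
            gamma, *_ = np.linalg.lstsq(dF, fx, rcond=None)
            x = x + fx - (dX + dF) @ gamma
    return x

# Example: anderson_fixed_point(np.cos, np.array([1.0])) converges to
# the solution of x = cos(x) (about 0.739).
```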