
Sequence Tutor: Conservative Fine-Tuning of Sequence Generation Models with KL-control

About

This paper proposes a general method for improving the structure and quality of sequences generated by a recurrent neural network (RNN), while maintaining information originally learned from data, as well as sample diversity. An RNN is first pre-trained on data using maximum likelihood estimation (MLE), and the probability distribution over the next token in the sequence learned by this model is treated as a prior policy. Another RNN is then trained using reinforcement learning (RL) to generate higher-quality outputs that account for domain-specific incentives while retaining proximity to the prior policy of the MLE RNN. To formalize this objective, we derive novel off-policy RL methods for RNNs from KL-control. The effectiveness of the approach is demonstrated on two applications: 1) generating novel musical melodies, and 2) computational molecular generation. For both problems, we show that the proposed method improves the desired properties and structure of the generated sequences, while maintaining information learned from data.
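The core idea in the abstract — rewarding domain-specific incentives while penalizing divergence from the pre-trained prior policy — can be sketched as a shaped per-token reward. The sketch below is illustrative, not the paper's exact objective: the function name and the weighting constant `c` are assumptions, and the paper derives several off-policy variants of this idea from KL-control.

```python
import math

def kl_shaped_reward(task_reward, prior_log_prob, c=0.5):
    """Illustrative KL-control-style reward shaping: combine a
    domain-specific task reward with the prior policy's log-probability
    of the chosen token. A larger `c` favors the task incentive; a
    smaller `c` keeps generation closer to the MLE prior."""
    return c * task_reward + prior_log_prob

# Toy comparison: two candidate tokens with equal task reward, one the
# prior considers likely (p=0.6) and one it considers unlikely (p=0.01).
likely = kl_shaped_reward(task_reward=1.0, prior_log_prob=math.log(0.6))
unlikely = kl_shaped_reward(task_reward=1.0, prior_log_prob=math.log(0.01))
print(likely > unlikely)  # the prior-consistent token scores higher
```

Under this shaping, maximizing expected reward trades off the task incentive against staying close to the distribution learned from data, which is how the method retains sample diversity and structure from pre-training.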

Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, José Miguel Hernández-Lobato, Richard E. Turner, Douglas Eck • 2016

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Emergent Misalignment Measurement | Code | Misalignment | 0.00e+0 | 6 |
| Emergent Misalignment Measurement | Medical General Evaluation | Misalignment | 2.5 | 6 |
| Emergent Misalignment Measurement | Security General Evaluation | Misalignment Score | 2.92 | 6 |
| Misaligned Task Learning | Legal In-domain | Misalignment | 7.67 | 6 |
| Misaligned Task Learning | Medical In-domain | Misalignment | 37.67 | 6 |
| Misaligned Task Learning | Security In-domain | Misalignment | 6.33 | 6 |
| Emergent Misalignment Measurement | Legal | Misalignment | 5.83 | 6 |
| Misaligned Task Learning | Code In-domain | Misalignment | 23.15 | 6 |
