Introduction
Reinforcement Learning from Human Feedback (RLHF) is a technique for incorporating human information into AI systems. RLHF emerged primarily as a method for solving hard-to-specify problems. Its early applications were often in control problems and other traditional domains for reinforcement learning (RL). RLHF became most widely known through the release of ChatGPT and the subsequent rapid development of large language models (LLMs) and other foundation models.
The basic pipeline for RLHF involves three steps. First, a language model that can follow user instructions must be trained (see Chapter 9). Second, human preference data must be collected to train a reward model of human preferences (see Chapter 7). Finally, the language model can be optimized with an RL optimizer of choice, by sampling generations and scoring them with the reward model (see Chapters 3 and 11). This book details key decisions and basic implementation examples for each step in this process.
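To make the shape of the pipeline concrete, below is a minimal sketch in Python. The callables (`train_instruction_model`, `train_reward_model`, `sample`, `rl_step`) are hypothetical placeholders standing in for the training code covered in later chapters, not a real library API; only the control flow of the three steps is shown.

```python
from typing import Callable, List

def rlhf_pipeline(
    base_model,
    instruction_data,
    preference_data,
    prompts: List[str],
    train_instruction_model: Callable,  # Step 1: supervised instruction tuning
    train_reward_model: Callable,       # Step 2: fit a reward model on preference data
    sample: Callable,                   # draws completions from the current policy
    rl_step: Callable,                  # one update of the RL optimizer (e.g. PPO)
    num_rl_steps: int = 1000,
):
    """Sketch of the three RLHF steps; all callables are hypothetical placeholders."""
    # Step 1: train a model that can follow user instructions (Chapter 9).
    policy = train_instruction_model(base_model, instruction_data)

    # Step 2: train a reward model of human preferences (Chapter 7).
    reward_model = train_reward_model(base_model, preference_data)

    # Step 3: optimize the policy against the reward model with an RL
    # optimizer of choice (Chapters 3 and 11).
    for _ in range(num_rl_steps):
        completions = sample(policy, prompts)
        rewards = [reward_model(p, c) for p, c in zip(prompts, completions)]
        policy = rl_step(policy, prompts, completions, rewards)

    return policy
```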
RLHF has been applied to many domains successfully, with complexity increasing as the techniques have matured. Early breakthrough experiments applied RLHF to deep reinforcement learning [1], summarization [2], following instructions [3], parsing web information for question answering [4], and “alignment” [5].
Scope of This Book
This book aims to cover each of the core steps of a canonical RLHF implementation. It will not cover the full history of the components or recent research methods, only the techniques, problems, and trade-offs that have been proven to occur again and again.
Chapter Summaries
After this Introduction, the book contains the following chapters:
Introductions:
- Introduction
- What are preferences?: The philosophy and social sciences behind RLHF.
- Optimization and RL: The problem formulation of RLHF.
- Seminal (Recent) Works: The core works leading to and following ChatGPT.
Problem Setup:
- Definitions: Mathematical reference.
- Preference Data: Gathering human preference data.
- Reward Modeling: Modeling human preferences to provide an environment signal.
- Regularization: Numerical tricks to stabilize and guide optimization.
Optimization:
- Instruction Tuning: Fine-tuning models to follow instructions.
- Rejection Sampling: Basic method for using a reward model to filter data.
- Policy Gradients: Core RL methods used to perform RLHF.
- Direct Alignment Algorithms: New preference fine-tuning (PreFT) algorithms that do not need RL.
Advanced (TBD):
- Constitutional AI
- Synthetic Data
- Evaluation
Open Questions (TBD):
- Over-optimization
- Style
Target Audience
This book is intended for audiences with entry-level experience in language modeling, reinforcement learning, and general machine learning. It will not have exhaustive documentation for all the techniques, only those crucial to understanding RLHF.
How to Use This Book
This book was largely created because there were no canonical references for important topics in the RLHF workflow. Its contributions are intended to give you the minimum knowledge needed to try a toy implementation or dive into the literature. This is not a comprehensive textbook, but rather a quick book for reminders and getting started. Additionally, given the web-first nature of this book, minor typos and somewhat random progressions are expected; please contribute by fixing bugs or suggesting important content on GitHub.
About the Author
Dr. Nathan Lambert is an RLHF researcher contributing to the open science of language model fine-tuning. He has released many models trained with RLHF, their subsequent datasets, and training codebases in his time at the Allen Institute for AI (Ai2) and HuggingFace. Examples include Zephyr-Beta, Tulu 2, OLMo, TRL, Open Instruct, and many more. He has written extensively on RLHF, including many blog posts and academic papers.
Future of RLHF
With the investment in language modeling, many variations on the traditional RLHF methods have emerged. RLHF has colloquially become synonymous with multiple overlapping approaches. RLHF is a subset of preference fine-tuning (PreFT) techniques, which also include Direct Alignment Algorithms (see Chapter 12). RLHF is the tool most associated with rapid progress in “post-training” of language models, which encompasses all training after the large-scale autoregressive training on primarily web data. This textbook is a broad overview of RLHF and its directly neighboring methods, such as instruction tuning and other implementation details needed to set up a model for RLHF training.
As more successes of fine-tuning language models with RL emerge, such as OpenAI’s o1 reasoning models, RLHF will be seen as the bridge that enabled further investment in RL methods for fine-tuning large base models.