Understanding Policy Gradients: A Simple RL Example
In reinforcement learning (RL), policy gradient algorithms are a powerful way to train agents to make optimal decisions. These algorithms optimize a policy by computing gradients of the expected return, often using the advantage formulation. In this blog, we’ll walk through a concrete example to demystify policy gradients, focusing on the gradient of the policy probability, $\nabla_\theta \pi_\theta(a \mid s)$, and of the log-probability, $\nabla_\theta \log \pi_\theta(a \mid s)$, and how they drive policy updates. Let’s dive into a simple RL scenario and see these concepts in action!
Who is this Article For?
- Readers familiar with the reinforcement learning framework.
- Readers familiar with policy gradient algorithms.
- Readers who want to learn policy gradients from scratch through hands-on math.
The Scenario: A Robot in a 1D Grid World
Imagine a robot navigating a 1D grid with three states: $s_1$, $s_2$, and $s_3$. The goal is to reach $s_3$, the terminal state, which yields a reward of +1. The robot can take two actions in each state:

- Left: Move one state left (e.g., $s_2 \to s_1$).
- Right: Move one state right (e.g., $s_1 \to s_2$).
Problem Details:
- States: $s_1$, $s_2$, $s_3$, where $s_3$ is terminal.
- Rewards:
  - Reaching $s_3$: $r = +1$.
  - All other transitions: $r = 0$.
- Discount Factor: $\gamma = 0.9$ (future rewards are discounted).
- Policy: A parameterized policy $\pi_\theta(a \mid s)$ determines the probability of each action.
Our focus will be on state $s_1$, where we’ll compute the policy gradient and update the policy based on a sampled trajectory.
Steps We Will Follow to Solve this Problem:
- 1. Defining the Policy: We’ll make a rule (policy function) that tells the robot how likely it is to move Left or Right in each spot. We’ll keep tweaking this rule to help the robot make better choices.
- 2. Simulating a Trajectory: We’ll let the robot move from one spot to another (like $s_1$ to $s_2$ to $s_3$) and see what steps it takes and what rewards it gets.
- 3. Computing the Return: We’ll total up the rewards the robot gets during its journey, giving more weight to rewards it gets sooner (since later rewards are less certain).
- 4. Estimating the Advantage: We’ll figure out if the robot’s moves were better or worse than average, so we can focus on improving the best ones.
- 5. Computing the Gradients: We’ll look at how much each move changes the robot’s decision rule, so we know how to adjust it.
- 6. Computing the Policy Gradient: We’ll combine the “how good” and “how to improve” info to decide how to update the robot’s decision rule.
- 7. Updating the Policy: We’ll tweak the robot’s decision rule a little to make it more likely to pick good moves next time.
- 8. Try a Different Path: We’ll test what happens if the robot picks a different move (like Left instead of Right) to see how it changes things.
1: Defining the Policy
Let’s define the policy for state $s_1$ using a logistic (sigmoid) function parameterized by a single parameter $\theta$:

$$\pi_\theta(\text{Right} \mid s_1) = \sigma(\theta) = \frac{1}{1 + e^{-\theta}}, \qquad \pi_\theta(\text{Left} \mid s_1) = 1 - \sigma(\theta)$$

Initially we can set $\theta = 0$:

$$\pi_\theta(\text{Right} \mid s_1) = \sigma(0) = 0.5, \qquad \pi_\theta(\text{Left} \mid s_1) = 0.5$$

The policy is equally likely to choose Left or Right in $s_1$.
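To make this concrete, here is a minimal Python sketch of the policy. The function names `pi_right` and `pi_left` are just illustrative labels, not part of any library:

```python
import math

def pi_right(theta: float) -> float:
    """Probability of choosing Right in s1 under the sigmoid policy."""
    return 1.0 / (1.0 + math.exp(-theta))

def pi_left(theta: float) -> float:
    """Probability of choosing Left in s1 (the complement)."""
    return 1.0 - pi_right(theta)

theta = 0.0
print(pi_right(theta), pi_left(theta))  # 0.5 0.5
```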
2: Simulating a Trajectory
Let’s simulate a trajectory starting from $s_1$:
- In $s_1$, the robot chooses Right (probability 0.5) and moves to $s_2$. Reward: $r_1 = 0$.
- In $s_2$, the robot chooses Right again, moving to $s_3$. Reward: $r_2 = +1$.
- In $s_3$, the episode ends (terminal state).
Trajectory: $s_1 \xrightarrow{\text{Right},\; r=0} s_2 \xrightarrow{\text{Right},\; r=+1} s_3$
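If you want to simulate this yourself, here is a rough sketch. The transition table and helper names (`TRANSITIONS`, `sample_trajectory`) are assumptions made for this example, including how Left behaves at the left boundary:

```python
import math
import random

# Assumed dynamics: (state, action) -> (next_state, reward); s3 is terminal.
TRANSITIONS = {
    ("s1", "Left"):  ("s1", 0.0),   # bumping the left wall keeps us in s1 (assumption)
    ("s1", "Right"): ("s2", 0.0),
    ("s2", "Left"):  ("s1", 0.0),
    ("s2", "Right"): ("s3", 1.0),   # reaching the terminal state pays +1
}

def pi_right(theta: float) -> float:
    return 1.0 / (1.0 + math.exp(-theta))

def sample_trajectory(theta: float, start: str = "s1", max_steps: int = 20):
    """Roll out the policy from `start`, recording (state, action, reward) tuples."""
    state, steps = start, []
    while state != "s3" and len(steps) < max_steps:
        action = "Right" if random.random() < pi_right(theta) else "Left"
        next_state, reward = TRANSITIONS[(state, action)]
        steps.append((state, action, reward))
        state = next_state
    return steps

print(sample_trajectory(theta=0.0))
# One possible outcome: [('s1', 'Right', 0.0), ('s2', 'Right', 1.0)]
```

Note the sketch reuses the same sigmoid policy in $s_2$ purely for simplicity; the blog only parameterizes the policy in $s_1$.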
3: Computing the Return
The return $G_t$ is the sum of discounted future rewards from time $t$. For the action taken in $s_1$ at $t = 0$:

$$G_0 = r_1 + \gamma r_2 = 0 + 0.9 \times 1 = 0.9$$
This return represents the total discounted reward for the trajectory.
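A small helper makes the discounting explicit (`discounted_return` is a hypothetical name; the list holds the rewards observed after acting in $s_1$, and $\gamma = 0.9$ is the discount factor from the problem setup):

```python
def discounted_return(rewards, gamma=0.9):
    """G_t = r_{t+1} + gamma * r_{t+2} + gamma^2 * r_{t+3} + ..."""
    g = 0.0
    for k, r in enumerate(rewards):
        g += (gamma ** k) * r
    return g

# Rewards after acting in s1: 0 for reaching s2, then +1 for reaching s3.
print(discounted_return([0.0, 1.0]))  # 0.9
```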
4: Estimating the Advantage
Policy gradient algorithms often use the advantage formulation:

$$\nabla_\theta J(\theta) = \mathbb{E}\!\left[\nabla_\theta \log \pi_\theta(a \mid s)\, A(s, a)\right], \qquad A(s, a) = Q(s, a) - V(s)$$

The advantage $A(s, a)$ measures how much better an action is compared to the average action in state $s$.
- $Q(s_1, \text{Right})$: The expected return for taking Right in $s_1$. From our trajectory, we approximate: $Q(s_1, \text{Right}) \approx G_0 = 0.9$.
- $V(s_1)$: The value of $s_1$ under the current policy. Assume we have an estimate: $V(s_1) \approx 0.5$.

Thus:

$$A(s_1, \text{Right}) = 0.9 - 0.5 = 0.4$$
A positive advantage means Right was better than average in $s_1$.
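In code the advantage is just a subtraction. The baseline $V(s_1) \approx 0.5$ is the assumed estimate from above, and `advantage` is an illustrative helper name:

```python
def advantage(q_estimate: float, v_estimate: float) -> float:
    """A(s, a) = Q(s, a) - V(s): how much better the action was than average."""
    return q_estimate - v_estimate

# Q(s1, Right) is approximated by the sampled return G_0 = 0.9,
# and V(s1) = 0.5 is the assumed baseline estimate.
print(advantage(q_estimate=0.9, v_estimate=0.5))  # 0.4
```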
5: Computing the Gradients
Now, we compute the gradients $\nabla_\theta \pi_\theta(\text{Right} \mid s_1)$ and $\nabla_\theta \log \pi_\theta(\text{Right} \mid s_1)$.
a) Gradient of the Policy Probability
The derivative of the sigmoid is:

$$\nabla_\theta \pi_\theta(\text{Right} \mid s_1) = \sigma(\theta)\,(1 - \sigma(\theta))$$

For $\theta = 0$:

$$\nabla_\theta \pi_\theta(\text{Right} \mid s_1) = 0.5 \times 0.5 = 0.25$$
This gradient shows that increasing $\theta$ increases the probability of Right by 0.25 per unit change in $\theta$ (to first order).
b) Gradient of the Log-Probability
Using the chain rule:

$$\nabla_\theta \log \pi_\theta(\text{Right} \mid s_1) = \frac{\nabla_\theta \pi_\theta(\text{Right} \mid s_1)}{\pi_\theta(\text{Right} \mid s_1)} = \frac{0.25}{0.5} = 0.5$$
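Both gradients at $\theta = 0$ can be checked numerically with a few lines (function names are illustrative):

```python
import math

def sigmoid(theta: float) -> float:
    return 1.0 / (1.0 + math.exp(-theta))

def grad_pi_right(theta: float) -> float:
    """d/dtheta sigmoid(theta) = sigmoid(theta) * (1 - sigmoid(theta))."""
    p = sigmoid(theta)
    return p * (1.0 - p)

def grad_log_pi_right(theta: float) -> float:
    """d/dtheta log sigmoid(theta) = 1 - sigmoid(theta), by the chain rule."""
    return 1.0 - sigmoid(theta)

theta = 0.0
print(grad_pi_right(theta))      # 0.25
print(grad_log_pi_right(theta))  # 0.5
```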
6: Computing the Policy Gradient
For our trajectory, we approximate the policy gradient with a single sample:

$$\nabla_\theta J(\theta) \approx \nabla_\theta \log \pi_\theta(\text{Right} \mid s_1)\, A(s_1, \text{Right}) = 0.5 \times 0.4 = 0.2$$
This positive gradient indicates we should increase $\theta$ to favor Right.
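Putting the pieces together for this single sampled action (the numbers come from the steps above, including the assumed baseline):

```python
grad_log_pi = 0.5   # gradient of log pi(Right|s1) at theta = 0, from step 5
adv = 0.4           # advantage of Right in s1, from step 4

# Single-sample (Monte Carlo) estimate of the policy gradient.
policy_gradient = grad_log_pi * adv
print(policy_gradient)  # 0.2
```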
7: Updating the Policy
Update $\theta$ using gradient ascent:

$$\theta \leftarrow \theta + \alpha\, \nabla_\theta J(\theta)$$
With learning rate $\alpha = 0.1$:

$$\theta_{\text{new}} = 0 + 0.1 \times 0.2 = 0.02$$

New policy:

$$\pi_\theta(\text{Right} \mid s_1) = \sigma(0.02) \approx 0.505$$
The probability of Right increases slightly, reflecting its positive advantage.
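The gradient ascent step in code, using the same illustrative learning rate:

```python
import math

theta = 0.0
alpha = 0.1             # learning rate (illustrative choice)
policy_gradient = 0.2   # from step 6

theta = theta + alpha * policy_gradient        # gradient *ascent* on expected return
pi_right_new = 1.0 / (1.0 + math.exp(-theta))  # new probability of Right in s1
print(theta, round(pi_right_new, 3))           # 0.02 0.505
```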
8: Exploring the Alternative Action
What if the robot chose Left in $s_1$? Suppose Left keeps the robot in $s_1$ with $r = 0$, and the episode ends. Then:
- Return: $G_0 = 0$.
- Advantage: $A(s_1, \text{Left}) = 0 - 0.5 = -0.5$.
- Gradients:
  - $\nabla_\theta \pi_\theta(\text{Left} \mid s_1) = -\sigma(\theta)\,(1 - \sigma(\theta)) = -0.25$.
  - $\nabla_\theta \log \pi_\theta(\text{Left} \mid s_1) = \frac{-0.25}{0.5} = -0.5$.
- Policy gradient: $\nabla_\theta J(\theta) \approx (-0.5) \times (-0.5) = 0.25$.
- Update: $\theta_{\text{new}} = 0 + 0.1 \times 0.25 = 0.025$, increasing $\pi_\theta(\text{Right} \mid s_1)$.
This shows the policy learns to avoid Left and favor Right.
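The same arithmetic for the Left case, written out as a sketch (the baseline $V(s_1) = 0.5$ and the learning rate remain the assumed values from earlier):

```python
import math

def sigmoid(theta: float) -> float:
    return 1.0 / (1.0 + math.exp(-theta))

theta, alpha, v_s1 = 0.0, 0.1, 0.5

# Left in s1: no reward and the episode ends, so the sampled return is 0.
g_left = 0.0
adv_left = g_left - v_s1                                  # -0.5

# pi(Left|s1) = 1 - sigmoid(theta), so its gradient is the negative of Right's.
grad_pi_left = -sigmoid(theta) * (1.0 - sigmoid(theta))   # -0.25
grad_log_pi_left = grad_pi_left / (1.0 - sigmoid(theta))  # -0.5

policy_gradient = grad_log_pi_left * adv_left             # (-0.5) * (-0.5) = 0.25
theta_new = theta + alpha * policy_gradient               # 0.025
print(round(sigmoid(theta_new), 4))                       # P(Right|s1) rises to ~0.5062
```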
Why Use $\nabla_\theta \log \pi_\theta(a \mid s)$?
You might wonder why we use $\nabla_\theta \log \pi_\theta(a \mid s)$ instead of $\nabla_\theta \pi_\theta(a \mid s)$. The policy gradient theorem naturally derives:

$$\nabla_\theta J(\theta) = \mathbb{E}_{\pi_\theta}\!\left[\nabla_\theta \log \pi_\theta(a \mid s)\, A(s, a)\right]$$
This is equivalent to using $\frac{\nabla_\theta \pi_\theta(a \mid s)}{\pi_\theta(a \mid s)}$, which simplifies computation and aligns with sampling actions from the policy distribution.
The log-probability gradient reduces variance and is computationally efficient.
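To see where that identity comes from, here is a brief sketch of the likelihood-ratio (score-function) trick for a single state $s$, treating the advantage as fixed with respect to $\theta$:

$$\begin{aligned}
\nabla_\theta\, \mathbb{E}_{a \sim \pi_\theta}\!\left[A(s, a)\right]
  &= \sum_a \nabla_\theta \pi_\theta(a \mid s)\, A(s, a) \\
  &= \sum_a \pi_\theta(a \mid s)\, \frac{\nabla_\theta \pi_\theta(a \mid s)}{\pi_\theta(a \mid s)}\, A(s, a)
   = \mathbb{E}_{a \sim \pi_\theta}\!\left[\nabla_\theta \log \pi_\theta(a \mid s)\, A(s, a)\right]
\end{aligned}$$

Sampling actions from $\pi_\theta$ then gives an unbiased estimate of this expectation, which is exactly what we did in step 6.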
Conclusion
This simple example shows how policy gradients work in practice. In real-world RL, we’d use multiple trajectories, neural network policies, and advanced algorithms like PPO or TRPO, but the core ideas remain the same.