Bug description
In chapter 5, the book trains a toy GPT for 10 steps and produces a loss curve plot. In that plot, the train loss goes down while the validation loss stays flat. However, the provided code generates very different plots every time it runs. Is this expected? I tried several ways to fix the random seed, to no avail. Any suggestions to make the training loop produce the desired loss curves every time?
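For reference, the seed-fixing I tried was along these lines (a minimal sketch; `set_seed` is my own hypothetical helper, and the exact calls may have differed slightly):

```python
import random

import numpy as np
import torch


def set_seed(seed: int = 123) -> None:
    """Seed the Python, NumPy, and PyTorch RNGs for reproducibility."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    # No-op on CPU-only machines (e.g. my macOS laptop), but kept for completeness
    if torch.cuda.is_available():
        torch.cuda.manual_seed_all(seed)


set_seed(123)
```

Even with this called once at the top of the script before model construction and training, the loss curves still differ between runs.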
What operating system are you using?
macOS
Where do you run your code?
Local (laptop, desktop)
Environment