To reduce the memory and compute cost of backpropagation when training generative models on limited hardware, consider the following techniques:
- Gradient Checkpointing: Instead of storing every intermediate activation for the backward pass, only a subset is kept and the rest are recomputed during backpropagation, trading extra compute for a much smaller memory footprint.
 
- Mixed Precision Training: Running most operations in half precision (float16 or bfloat16) while keeping numerically sensitive operations in float32 roughly halves activation memory and can deliver substantial speedups on GPUs with hardware support for low-precision math.
 
- Model Pruning: Removes redundant parameters (for example, weights with small magnitudes) to shrink the model, reducing memory use and often inference cost with little loss in accuracy.
 
- Gradient Accumulation: Accumulates gradients over several small micro-batches before performing a single optimizer step, emulating a large effective batch size on GPUs that cannot hold the full batch in memory.
 
- Distributed Training: Splits the training workload (data batches and/or model shards) across multiple GPUs or machines, so that models too large or too slow to train on a single device remain trainable.
 
Together, these techniques make it practical to train generative models on limited hardware; minimal code sketches of each one follow below.
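The sketches below assume PyTorch; the framework choice, model definitions, layer sizes, and hyperparameters are illustrative placeholders rather than recommendations. First, a gradient checkpointing sketch using `torch.utils.checkpoint` (a recent PyTorch version is assumed for the `use_reentrant` argument): activations inside each checkpointed block are recomputed during the backward pass instead of being stored.

```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint


class CheckpointedMLP(nn.Module):
    """A stack of blocks whose inner activations are recomputed on backward."""

    def __init__(self, dim=1024, depth=8):
        super().__init__()
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(depth)]
        )

    def forward(self, x):
        for block in self.blocks:
            # Activations inside `block` are not saved for the backward pass;
            # they are recomputed during backpropagation, trading compute for memory.
            x = checkpoint(block, x, use_reentrant=False)
        return x


model = CheckpointedMLP()
inputs = torch.randn(16, 1024, requires_grad=True)
loss = model(inputs).sum()
loss.backward()  # each block's forward is re-run here to rebuild activations
```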
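Next, a minimal mixed precision training loop using PyTorch automatic mixed precision (AMP); the linear model, random data, and learning rate are stand-ins.

```python
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
use_amp = device == "cuda"  # AMP with float16 is intended for GPU training

model = nn.Linear(512, 10).to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

for step in range(10):
    inputs = torch.randn(32, 512, device=device)
    targets = torch.randint(0, 10, (32,), device=device)

    optimizer.zero_grad(set_to_none=True)
    # The forward pass runs in float16 where it is safe and float32 elsewhere.
    with torch.autocast(device_type=device, enabled=use_amp):
        loss = loss_fn(model(inputs), targets)

    # The loss is scaled to avoid float16 gradient underflow; gradients are
    # unscaled again before the optimizer step.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```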
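A pruning sketch using `torch.nn.utils.prune`; the 30% sparsity target and the choice of L1 magnitude pruning are arbitrary examples, and in practice pruning is usually followed by fine-tuning to recover accuracy.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

# Zero out the 30% of weights with the smallest magnitude in each Linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)

# Make the pruning permanent by removing the reparameterization hooks.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")

sparsity = (model[0].weight == 0).float().mean().item()
print(f"Layer 0 sparsity: {sparsity:.0%}")
```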
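A gradient accumulation sketch; the micro-batch size of 8 and the 4 accumulation steps are placeholders chosen so that 4 × 8 emulates an effective batch of 32.

```python
import torch
import torch.nn as nn

model = nn.Linear(256, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

accumulation_steps = 4  # one optimizer step per 4 micro-batches
optimizer.zero_grad(set_to_none=True)

for step in range(100):
    inputs = torch.randn(8, 256)   # small micro-batch that fits in memory
    targets = torch.randn(8, 1)

    loss = loss_fn(model(inputs), targets)
    # Divide by the number of accumulation steps so the summed gradients
    # match the average over the full effective batch.
    (loss / accumulation_steps).backward()

    if (step + 1) % accumulation_steps == 0:
        optimizer.step()
        optimizer.zero_grad(set_to_none=True)
```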
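Finally, a data-parallel distributed training sketch with PyTorch `DistributedDataParallel`; it assumes multiple NVIDIA GPUs and a launch via `torchrun --nproc_per_node=<num_gpus> train.py`, and the model and random data are again placeholders.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each worker process.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = nn.Linear(512, 10).cuda(local_rank)
    # DDP keeps a model replica on each GPU and averages gradients across
    # processes during backward().
    model = DDP(model, device_ids=[local_rank])

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(10):
        inputs = torch.randn(32, 512, device=local_rank)
        targets = torch.randint(0, 10, (32,), device=local_rank)

        optimizer.zero_grad(set_to_none=True)
        loss = loss_fn(model(inputs), targets)
        loss.backward()  # gradient all-reduce across workers happens here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```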