This document summarizes SECOND THOUGHTS, an approach that enables language models to re-align generated text with human values. The method models the chain of edits that transforms a value-unaligned source text into a value-aligned target text, fine-tunes a language model on these edit chains, and further refines the model with reinforcement learning. Experiments on several benchmark datasets show that the method improves the alignment of generated responses with human values while preserving coherence. The generated editing steps also make the model's behavior more interpretable, which helps in diagnosing and correcting errors. The approach remains limited, however, by the capabilities of the underlying language model.
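
To make the edit-chain idea concrete, below is a minimal sketch in Python of how a chain of edits between an unaligned source and an aligned target might be derived and serialized into a fine-tuning example. The word-level diff (via `difflib`), the tag format, and the function names are illustrative assumptions for exposition, not the paper's actual pipeline.

```python
import difflib


def edit_chain(source: str, target: str) -> list[tuple[str, str, str]]:
    """Derive a word-level chain of edits transforming `source` into `target`.

    Returns (operation, source_span, target_span) triples, where operation
    is one of 'equal', 'replace', 'delete', or 'insert'.
    """
    src, tgt = source.split(), target.split()
    matcher = difflib.SequenceMatcher(a=src, b=tgt)
    chain = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        chain.append((op, " ".join(src[i1:i2]), " ".join(tgt[j1:j2])))
    return chain


def serialize_for_training(source: str, target: str) -> str:
    """Linearize source, edit chain, and target into one training sequence,
    so a language model can learn to emit the edits before the final text.
    The tag syntax here is a hypothetical serialization format."""
    steps = [
        f"<{op}> '{before}' -> '{after}'"
        for op, before, after in edit_chain(source, target)
        if op != "equal"  # only non-trivial edits carry alignment signal
    ]
    return f"SOURCE: {source}\nEDITS: {' ; '.join(steps)}\nTARGET: {target}"


if __name__ == "__main__":
    unaligned = "You should just give up on your goals."
    aligned = "You should keep working toward your goals."
    print(serialize_for_training(unaligned, aligned))
```

Serialized examples of this kind could then be used for supervised fine-tuning, with a reinforcement-learning stage subsequently rewarding edit chains whose targets score higher under a value-alignment signal; the specifics of that reward are left to the paper itself.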