This document discusses auto scaling in cloud computing environments. It explains that auto scaling lets resources scale up or down with demand, providing an elastic architecture. It then outlines the key steps to implementing auto scaling: 1) creating a launch configuration that specifies parameters such as the AMI and instance type; 2) creating an auto scaling group that sets the scaling boundaries and attaches load balancers; and 3) creating scaling policies tied to CloudWatch alarm metrics that determine when to add or remove instances based on measurements such as response time. The document emphasizes that auto scaling is driven by code: configuration-management tools such as Chef and Puppet provision new instances automatically.
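The alarm-driven decision in step 3 can be sketched as a simple function. This is an illustration only, not AWS code: the threshold values, bounds, and the `desired_capacity` function are hypothetical stand-ins for what CloudWatch alarms and scaling policies would encode.

```python
# Hypothetical thresholds for illustration; in practice these live in
# CloudWatch alarms attached to the auto scaling group's policies.
SCALE_OUT_THRESHOLD_MS = 500   # scale out when avg response time exceeds this
SCALE_IN_THRESHOLD_MS = 100    # scale in when avg response time falls below this

def desired_capacity(current: int, avg_response_ms: float,
                     min_size: int = 2, max_size: int = 10) -> int:
    """Return the new instance count after applying simple step policies,
    clamped to the auto scaling group's min/max size."""
    if avg_response_ms > SCALE_OUT_THRESHOLD_MS:
        return min(current + 1, max_size)   # high-response-time alarm: add one
    if avg_response_ms < SCALE_IN_THRESHOLD_MS:
        return max(current - 1, min_size)   # low-response-time alarm: remove one
    return current                          # metric in range: no action

print(desired_capacity(4, 750))   # slow responses -> 5
print(desired_capacity(4, 50))    # idle capacity -> 3
print(desired_capacity(4, 250))   # within range  -> 4
```

Real policies add cooldown periods and evaluation windows so a brief spike does not trigger repeated scaling, but the core logic is this threshold comparison.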