This document discusses intelligent agents and their environments. It defines an agent as anything that perceives its environment through sensors and acts upon it through actuators. A rational agent selects the action expected to maximize its performance measure, given the percept sequence it has observed so far. An agent's task environment consists of its performance measure, environment, actuators, and sensors. Environment types include fully vs. partially observable, deterministic vs. stochastic, and single- vs. multi-agent. Four basic agent types are described: simple reflex agents, model-based reflex agents, goal-based agents, and utility-based agents. Learning agents use feedback to improve their performance over time. The document provides examples of agents and discusses design considerations that follow from the properties of the task environment.
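
The simplest of the agent types above can be sketched concretely. The following is a minimal illustration, assuming the two-location vacuum world often used as a teaching example (locations `A` and `B`, each either `Dirty` or `Clean`); the function and value names are illustrative, not from the document. A simple reflex agent chooses its action from condition-action rules using only the current percept, with no internal state or model of the world:

```python
def reflex_vacuum_agent(percept):
    """Simple reflex agent for a hypothetical two-location vacuum world.

    percept is a (location, status) pair, e.g. ("A", "Dirty").
    The agent maps the current percept directly to an action via
    condition-action rules; it keeps no history or world model.
    """
    location, status = percept
    if status == "Dirty":
        return "Suck"          # rule: if current square is dirty, clean it
    elif location == "A":
        return "Right"         # rule: if in A and clean, move to B
    else:
        return "Left"          # rule: if in B and clean, move to A


# Example percepts and the resulting actions:
print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
print(reflex_vacuum_agent(("A", "Clean")))  # Right
print(reflex_vacuum_agent(("B", "Clean")))  # Left
```

Because the agent ignores percept history, it works only when the correct action is fully determined by the current percept; in a partially observable environment, a model-based agent that maintains internal state would be needed instead.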