Hadoop 2.0 is a scale-out storage and distributed processing framework: it stores large datasets across clusters of commodity hardware and processes them with a simple programming model. Storage is handled by HDFS, which splits files into blocks (128 MB by default), replicates each block across racks for fault tolerance, and is coordinated by a NameNode that tracks block metadata and DataNodes that report heartbeats. Compute is managed separately by YARN: a ResourceManager schedules work onto per-machine NodeManagers, which track available hardware resources (CPU, memory) and send heartbeats back to the ResourceManager.
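To make the block-size and replication behavior concrete, here is a minimal sketch using the HDFS Java client. It assumes a running cluster reachable from the client's default configuration; the path /tmp/demo.txt is hypothetical, while dfs.blocksize and dfs.replication are the standard 2.x configuration keys (normally set cluster-wide in hdfs-site.xml rather than in code).

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsBlockDemo {
    public static void main(String[] args) throws Exception {
        // Overriding the defaults here affects only files written by this client;
        // cluster-wide values live in hdfs-site.xml.
        Configuration conf = new Configuration();
        conf.setLong("dfs.blocksize", 128L * 1024 * 1024); // 128 MB blocks (2.x default)
        conf.setInt("dfs.replication", 3);                 // three replicas per block

        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/tmp/demo.txt"); // hypothetical demo path
        try (FSDataOutputStream out = fs.create(file)) {
            out.writeUTF("hello hdfs"); // file is split into blocks and replicated on write
        }
        fs.close();
    }
}
```

As the file is written, the NameNode hands the client a pipeline of DataNodes for each 128 MB block, and the rack-aware placement policy spreads the three replicas so that a single rack failure cannot lose a block.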