MapReduce is a programming model for processing large datasets on a distributed system; it automatically parallelizes the computation and distributes the work across machines. The model consists of a map step, which processes input key-value pairs to generate intermediate key-value pairs, and a reduce step, which merges all intermediate values associated with the same intermediate key. Word count illustrates the model well: the map function emits an intermediate pair (word, 1) for each word in its input, and the reduce function sums all counts emitted for the same word to produce that word's total. Hadoop is an open-source implementation of MapReduce that adds fault tolerance and locality optimizations (scheduling tasks near the data they read) for processing large datasets across clusters.
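To make the map and reduce steps concrete, here is a minimal single-process sketch of word count in Python. It is not a distributed implementation and is not tied to any framework API; the names map_phase, reduce_phase, and word_count are illustrative, and the grouping loop stands in for the shuffle step that a real MapReduce framework performs between the two phases.

```python
from collections import defaultdict

def map_phase(document):
    # Map step: emit an intermediate (word, 1) pair for each word.
    for word in document.split():
        yield (word, 1)

def reduce_phase(word, counts):
    # Reduce step: merge all intermediate values for one key by summing.
    return (word, sum(counts))

def word_count(documents):
    # Stand-in for the shuffle step: group intermediate pairs by key,
    # as the framework would before invoking the reducer.
    grouped = defaultdict(list)
    for doc in documents:
        for word, count in map_phase(doc):
            grouped[word].append(count)
    # Apply the reducer once per distinct key.
    return dict(reduce_phase(w, c) for w, c in grouped.items())

print(word_count(["the quick brown fox", "the lazy dog"]))
# {'the': 2, 'quick': 1, 'brown': 1, 'fox': 1, 'lazy': 1, 'dog': 1}
```

In a real deployment, the framework runs many map tasks and reduce tasks in parallel on different machines and handles the grouping, so the programmer only supplies the two functions above.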