Big Data refers to datasets whose volume, velocity, and variety exceed what traditional single-machine tools can manage, and Hadoop is an open-source framework for storing and processing such data across clusters of commodity computers. Hadoop addresses these challenges with two core components: HDFS (the Hadoop Distributed File System), which splits data into blocks and distributes them across machines, and MapReduce, a programming model that processes those blocks in parallel and then combines the partial results. Because both storage and computation are distributed, the system scales out simply by adding more nodes.
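As a concrete illustration, below is a minimal sketch of the classic word-count job written against Hadoop's MapReduce Java API: the mapper emits a (word, 1) pair for every token in its input split, and the reducer sums those counts per word. The class names and the input/output paths are illustrative choices, not part of any particular deployment.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: runs in parallel on each HDFS block of the input,
  // emitting (word, 1) for every token it sees.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reduce phase: receives all counts for a given word (grouped by the
  // framework's shuffle step) and sums them into a single total.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation on each mapper
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // args[0] and args[1] are assumed to be HDFS input and output paths.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

Assuming the class is packaged into a JAR named wordcount.jar, it would typically be submitted to the cluster with something like `hadoop jar wordcount.jar WordCount /input /output`, where /input and /output are HDFS directories. This single small program is enough to see the division of labor the paragraph above describes: HDFS decides where the data lives, and MapReduce decides how the per-block work and the final aggregation are carried out.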