This paper presents a holistic approach to distributed dimensionality reduction for big data, addressing three key issues: fusing heterogeneous big data, designing a dimensionality reduction algorithm, and constructing a distributed computing platform. It introduces a chunk tensor method to unify heterogeneous data and a Lanczos-based high-order singular value decomposition (HOSVD) algorithm to efficiently extract the core data. The proposed approach improves the efficiency of processing the exponentially growing volume of big data.
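As a rough illustration of the second contribution, the sketch below shows one common way a Lanczos-based truncated HOSVD can be realized: the leading singular vectors of each mode unfolding are computed with a Lanczos-type partial SVD, and the core tensor is obtained by projecting onto those factor matrices. This is a minimal, single-machine sketch under our own assumptions (SciPy's ARPACK-backed `svds` as the Lanczos solver, user-supplied multilinear ranks), not the authors' distributed implementation.

```python
# Minimal sketch of a Lanczos-based truncated HOSVD (not the paper's code).
import numpy as np
from scipy.sparse.linalg import svds


def lanczos_hosvd(tensor, ranks):
    """Truncated HOSVD using Lanczos-type (ARPACK) partial SVDs.

    tensor : ndarray of order N
    ranks  : assumed target multilinear ranks (r_1, ..., r_N)
    Returns the core tensor and the list of factor matrices.
    """
    factors = []
    for mode, r in enumerate(ranks):
        # Unfold the tensor along `mode` into a matrix.
        unfolding = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
        # Lanczos-type partial SVD: only the r leading left singular vectors.
        u, _, _ = svds(unfolding, k=r)
        # svds returns singular triplets in ascending order; reverse columns.
        factors.append(u[:, ::-1])

    # Project mode by mode onto the factor subspaces to obtain the core tensor.
    core = tensor
    for mode, u in enumerate(factors):
        core = np.moveaxis(
            np.tensordot(u.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors


if __name__ == "__main__":
    # Hypothetical usage on a random 3rd-order tensor.
    X = np.random.rand(40, 50, 60)
    core, factors = lanczos_hosvd(X, ranks=(5, 5, 5))
    print(core.shape)  # (5, 5, 5)
```

Because only a few leading singular vectors per mode are needed, a Lanczos-type partial SVD avoids the full decomposition of each (typically very wide) unfolding, which is what makes this kind of approach attractive for large tensors.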