The document presents the 360-vsumm dataset, designed for training and evaluating 360° video summarization methods in response to the growing interest in 360° video content. It details the dataset's generation and annotation processes; the dataset consists of 40 diverse 2D videos with human-generated summaries. It also evaluates the applicability of conventional video summarization methods to 360° video data. The results highlight the need for methods tailored to 360° videos and demonstrate that incorporating frame saliency improves summarization performance.
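The summary does not specify how frame saliency is incorporated; the sketch below is only one plausible reading, in which per-frame saliency scores are blended with a summarizer's importance scores before keyframe selection. All function names, the fusion weight `alpha`, and the summary budget are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch: fusing per-frame saliency with importance scores
# for summary selection. Names and parameters are assumptions for illustration.
import numpy as np


def fuse_scores(importance: np.ndarray, saliency: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Blend min-max-normalized importance and saliency scores per frame."""
    def normalize(x: np.ndarray) -> np.ndarray:
        rng = x.max() - x.min()
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    return alpha * normalize(importance) + (1.0 - alpha) * normalize(saliency)


def select_summary(scores: np.ndarray, budget: float = 0.15) -> np.ndarray:
    """Pick the highest-scoring frames, up to a fraction of the video length."""
    k = max(1, int(budget * len(scores)))
    return np.sort(np.argsort(scores)[::-1][:k])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    importance = rng.random(300)  # e.g., scores from a conventional 2D summarizer
    saliency = rng.random(300)    # e.g., per-frame saliency of the 360° content
    summary_frames = select_summary(fuse_scores(importance, saliency))
    print(summary_frames)
```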