This document summarizes a project to classify web pages as either ephemeral (short-lived) or evergreen (long-lived) content. The goal is to build a classifier using models such as Naive Bayes, logistic regression, SVM, and random forests. Data is scraped from websites and preprocessed with techniques such as bag-of-words and TF-IDF. Initial results show the SVM and random forest models performing best, with accuracies of roughly 86% and 80%, respectively. Future work includes verifying outliers, exploring ensemble methods, and investigating applications in recommendation systems, archival projects, and targeted advertising.
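
The following is a minimal sketch of the kind of pipeline described above, assuming scikit-learn as the toolkit (the summary does not name a library) and a small hypothetical set of page texts in place of the scraped corpus; the scraping and cleaning steps are omitted.

```python
# Sketch of a TF-IDF + classifier pipeline, assuming scikit-learn.
# The page texts and labels below are hypothetical placeholders for the
# scraped data described in the summary (0 = ephemeral, 1 = evergreen).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.svm import LinearSVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

page_texts = [
    "flash sale ends tonight",              # ephemeral
    "election results live updates",        # ephemeral
    "weather forecast for this weekend",    # ephemeral
    "how to tie a bowline knot",            # evergreen
    "introduction to linear algebra",       # evergreen
    "recipe for classic sourdough bread",   # evergreen
]
labels = [0, 0, 0, 1, 1, 1]

# TF-IDF preprocessing: turn raw page text into weighted term vectors.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(page_texts)

# Hold out a stratified test split so both classes appear in train and test.
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=1 / 3, stratify=labels, random_state=0
)

# Fit and evaluate the two best-performing model families from the summary.
for name, model in [
    ("SVM", LinearSVC()),
    ("Random forest", RandomForestClassifier(random_state=0)),
]:
    model.fit(X_train, y_train)
    preds = model.predict(X_test)
    print(f"{name} accuracy: {accuracy_score(y_test, preds):.2f}")
```

In a real run, the placeholder texts would be replaced by the scraped pages, and accuracy would be measured with cross-validation rather than a single small split.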