This document describes a student project to build a basic web search engine. It outlines the key components of a search engine: a web crawler that fetches pages from websites, an indexer that organizes the fetched content in a database, and a search interface that retrieves relevant results from that database in response to keyword queries. The proposed project will implement a crawler that visits websites in breadth-first order, computes a score for each page from its keyword and link counts, and returns the 30 highest-scoring pages to the user for each search query. The goal of the project is to gain hands-on experience with basic search engine functionality: web crawling, indexing, and ranking.
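To make the crawl-score-rank loop concrete, the following is a minimal Python sketch of the approach described above: a breadth-first crawl from a seed URL, a page score combining keyword occurrences and outgoing link count, and a top-30 result list. The seed URL, the crawl budget (`MAX_PAGES`), the equal weighting of keywords and links, and all function names are illustrative assumptions rather than part of the project specification.

```python
# Illustrative sketch only; names, limits, and scoring weights are assumptions.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

MAX_PAGES = 100   # assumed crawl budget for the demo
TOP_N = 30        # the project returns the 30 highest-scoring pages


class PageParser(HTMLParser):
    """Collects outgoing links and visible text from one HTML page."""

    def __init__(self):
        super().__init__()
        self.links = []
        self.text_parts = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        self.text_parts.append(data)


def score_page(text, links, keywords):
    """Assumed scoring: keyword occurrences plus outgoing link count."""
    words = text.lower().split()
    keyword_hits = sum(words.count(k.lower()) for k in keywords)
    return keyword_hits + len(links)


def crawl(seed_url, keywords):
    """Breadth-first crawl from seed_url; returns the TOP_N highest-scoring pages."""
    queue = deque([seed_url])
    visited = set()
    scores = {}

    while queue and len(visited) < MAX_PAGES:
        url = queue.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="ignore")
        except Exception:
            continue  # skip pages that fail to download or parse

        parser = PageParser()
        parser.feed(html)
        text = " ".join(parser.text_parts)
        scores[url] = score_page(text, parser.links, keywords)

        # Enqueue newly discovered links, preserving breadth-first order.
        for link in parser.links:
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in visited:
                queue.append(absolute)

    ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)
    return ranked[:TOP_N]


if __name__ == "__main__":
    # Example query against an assumed seed site.
    for url, score in crawl("https://example.com", ["search", "engine"]):
        print(score, url)
```

In a fuller implementation the crawler and the query path would be separated: the crawler would store page text and scores in the indexer's database, and the search interface would rank stored pages against each incoming query rather than re-crawling.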