The document presents a large-scale experiment comparing 43 strategies for automated semantic document annotation across three datasets in economics, political science, and computer science. It evaluates methods for each stage of the annotation pipeline, i.e. concept extraction, concept activation, and annotation selection, and identifies the best-performing combination as entity-based extraction with graph-based activation and kNN-based selection. The findings indicate that using domain-specific knowledge bases significantly improves performance, and that graph-based concept activation outperforms statistical and hierarchy-based methods.
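
To make the winning pipeline concrete, below is a minimal sketch of one plausible reading of that combination: entity extraction against a controlled vocabulary, spreading-activation over a concept graph, and kNN voting over previously annotated documents. All function names, parameters (decay, steps, k), the toy thesaurus, and the toy corpus are illustrative assumptions, not the paper's implementation.

```python
"""Hypothetical sketch: entity extraction -> graph-based activation -> kNN selection."""

from collections import defaultdict
from math import sqrt


def extract_entities(text, vocabulary):
    """Entity extraction: match controlled-vocabulary labels occurring in the text."""
    lowered = text.lower()
    return {concept for concept, label in vocabulary.items() if label in lowered}


def activate_concepts(seeds, graph, decay=0.5, steps=2):
    """Graph-based activation: spread scores from seed concepts to related concepts."""
    scores = defaultdict(float)
    frontier = {concept: 1.0 for concept in seeds}
    for _ in range(steps):
        next_frontier = defaultdict(float)
        for concept, score in frontier.items():
            scores[concept] += score
            for neighbour in graph.get(concept, ()):
                next_frontier[neighbour] += score * decay
        frontier = next_frontier
    return dict(scores)


def cosine(a, b):
    """Cosine similarity between two sparse score vectors (dicts)."""
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in set(a) | set(b))
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def knn_select(doc_scores, annotated_corpus, k=2, top_n=3):
    """kNN selection: aggregate annotations from the k most similar annotated documents."""
    neighbours = sorted(
        annotated_corpus,
        key=lambda item: cosine(doc_scores, item["scores"]),
        reverse=True,
    )[:k]
    votes = defaultdict(float)
    for item in neighbours:
        similarity = cosine(doc_scores, item["scores"])
        for annotation in item["annotations"]:
            votes[annotation] += similarity
    return [a for a, _ in sorted(votes.items(), key=lambda x: -x[1])[:top_n]]


if __name__ == "__main__":
    # Toy domain thesaurus: concept id -> preferred label, plus a related-concept graph.
    vocabulary = {"C1": "inflation", "C2": "monetary policy", "C3": "interest rate"}
    graph = {"C1": ["C2"], "C2": ["C1", "C3"], "C3": ["C2"]}

    # Toy corpus of previously annotated documents (activated scores + gold annotations).
    corpus = [
        {"scores": {"C1": 1.0, "C2": 0.5}, "annotations": ["Inflation", "Central banking"]},
        {"scores": {"C3": 1.0, "C2": 0.5}, "annotations": ["Interest rates"]},
    ]

    text = "The paper studies inflation and the response of monetary policy."
    entities = extract_entities(text, vocabulary)
    activated = activate_concepts(entities, graph)
    print(knn_select(activated, corpus, k=2))
```

In a real setting the toy thesaurus would be replaced by a domain-specific knowledge base and the toy corpus by a training set of manually annotated documents, which is where the reported gain from domain-specific knowledge bases would come into play.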