This document proposes a method for continuously validating load test suites by comparing event signatures between load testing and production workloads. For each unique user, it generates a signature that counts the events that user triggers. It then detects outlier signatures by comparing them across the two workloads and inspects the outliers to determine whether they stem from differences in issues, features, or workload intensities between the test and production environments. This helps performance analysts identify when a load test no longer adequately represents an evolving production workload. The approach flags problematic events with higher precision than comparing event frequencies alone.
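As a rough illustration of the idea, the Python sketch below builds per-user event-count signatures and flags production users whose signature is far from every signature observed in the load test. The Manhattan distance, the `threshold` parameter, and all event names are hypothetical choices made for this sketch, not the measure or data used by the proposed method.

```python
from collections import Counter

def build_signature(user_events):
    """Signature for one user: a count of each event type the user triggered."""
    return Counter(user_events)

def signature_distance(sig_a, sig_b):
    """Manhattan distance between two event-count signatures (a hypothetical
    choice; the proposed method may use a different comparison)."""
    events = set(sig_a) | set(sig_b)
    return sum(abs(sig_a.get(e, 0) - sig_b.get(e, 0)) for e in events)

def flag_outliers(test_sigs, prod_sigs, threshold):
    """Flag production users whose signature is far from every load-test
    signature; `threshold` is a hypothetical tuning parameter."""
    outliers = []
    for user, prod_sig in prod_sigs.items():
        nearest = min(signature_distance(prod_sig, t) for t in test_sigs.values())
        if nearest > threshold:
            outliers.append((user, nearest))
    return outliers

# Toy event logs keyed by user ID (hypothetical data).
test_logs = {"t1": ["login", "search", "search", "logout"],
             "t2": ["login", "search", "logout"]}
prod_logs = {"p1": ["login", "search", "logout"],
             "p2": ["login", "checkout", "checkout", "checkout", "logout"]}

test_sigs = {u: build_signature(ev) for u, ev in test_logs.items()}
prod_sigs = {u: build_signature(ev) for u, ev in prod_logs.items()}

print(flag_outliers(test_sigs, prod_sigs, threshold=2))
# [('p2', 4)] -- p2's "checkout" events never occur in the test workload,
# so its signature has no close match among the load-test signatures.
```

In this sketch, an inspected outlier such as p2 points directly at the responsible events (here, "checkout"), which is what lets the approach flag problematic events rather than only noting a shift in overall event frequencies.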