The document discusses measuring quality in crowdsourced semantic-interpretation tasks where annotators disagree. It introduces the "three sides of CrowdTruth" (representation, workers, and sentences) and shows that sentence quality and relation quality both affect measurements of worker quality, so that accounting for these interdependencies significantly improves the accuracy of worker metrics at detecting spam annotators. In evaluations, filtering out both low-quality sentences and vague relations gives the best separation between high- and low-quality workers.
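The interdependence among the three quality measures can be made concrete with a small fixed-point computation. The sketch below is illustrative only, not the CrowdTruth implementation: it assumes a binary worker x sentence x relation annotation tensor and hypothetical score names (wqs, sqs, rqs), and it iterates worker, sentence, and relation quality until they stabilise, so that noisy sentences and vague relations stop dragging down the scores of good workers.

```python
# A minimal sketch, assuming a toy annotation tensor and illustrative
# weighting choices; it mimics the idea of mutually dependent quality
# metrics, not the authors' exact formulas.

import numpy as np

# annotations[w, s, r] = 1 if worker w selected relation r for sentence s
rng = np.random.default_rng(0)
annotations = (rng.random((5, 20, 4)) > 0.6).astype(float)  # toy data (assumption)
n_workers, n_sentences, n_relations = annotations.shape


def cosine(u, v):
    """Cosine similarity, defined as 0 when either vector is all zeros."""
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0


# Initialise all quality scores to 1 and iterate, since worker,
# sentence, and relation quality depend on one another.
wqs = np.ones(n_workers)      # worker quality
sqs = np.ones(n_sentences)    # sentence quality (clarity)
rqs = np.ones(n_relations)    # relation quality (non-vagueness)

for _ in range(20):
    # Sentence vectors: worker-quality-weighted sum of annotations,
    # with vague relations down-weighted by their quality score.
    sent_vec = np.einsum("w,wsr,r->sr", wqs, annotations, rqs)

    # Worker quality: leave-one-out agreement of a worker's annotation
    # vector with the rest of the crowd, weighted by sentence quality.
    new_wqs = np.zeros(n_workers)
    for w in range(n_workers):
        sims = np.array([
            cosine(annotations[w, s] * rqs,
                   sent_vec[s] - wqs[w] * annotations[w, s] * rqs)
            for s in range(n_sentences)
        ])
        new_wqs[w] = np.average(sims, weights=sqs)

    # Sentence quality: worker-quality-weighted pairwise agreement on the
    # sentence; unclear sentences attract disagreement and score low.
    new_sqs = np.zeros(n_sentences)
    for s in range(n_sentences):
        num, den = 0.0, 0.0
        for i in range(n_workers):
            for j in range(i + 1, n_workers):
                w_ij = wqs[i] * wqs[j]
                num += w_ij * cosine(annotations[i, s] * rqs,
                                     annotations[j, s] * rqs)
                den += w_ij
        new_sqs[s] = num / den if den else 0.0

    # Relation quality: how consistently a relation appears on sentences
    # with high agreement; vague relations end up with low scores.
    rel_mass = np.einsum("w,wsr->sr", wqs, annotations)
    new_rqs = np.array([
        np.average(new_sqs, weights=rel_mass[:, r]) if rel_mass[:, r].sum() else 0.0
        for r in range(n_relations)
    ])

    wqs, sqs, rqs = new_wqs, new_sqs, new_rqs

print("worker quality:", np.round(wqs, 3))
print("lowest-quality (candidate spam) worker:", int(wqs.argmin()))
```

Thresholding the converged worker scores (for example, flagging the lowest-scoring workers) is one way such metrics can support spam detection; the specific weighting scheme and any threshold here are assumptions for illustration.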