The document evaluates the quality of annotations produced by non-experts (Turkers) on Amazon Mechanical Turk across several natural language tasks, comparing them against expert annotations. It finds that, on average, four Turker annotations must be aggregated to match expert inter-annotator agreement. Tasks such as word similarity and word sense disambiguation achieved high agreement with expert labels, while textual entailment showed disagreements that required further analysis. Overall, Turker annotations provide a cheap, fast, and reasonably reliable source of labeled data compared with relying on expert annotation alone.
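
To make the aggregation idea concrete, below is a minimal sketch, not the document's exact procedure, of combining several non-expert labels per item by majority vote and measuring how agreement with an expert (gold) label changes as more annotators are pooled. The data, the choice of majority voting, and the use of simple accuracy as the agreement measure are illustrative assumptions, not details taken from the source.

```python
# Sketch: majority-vote aggregation of non-expert labels vs. a gold label.
# All names and data here are hypothetical.
from collections import Counter
from itertools import combinations
from statistics import mean

def majority_vote(labels):
    """Return the most common label; ties are broken arbitrarily."""
    return Counter(labels).most_common(1)[0][0]

def agreement_with_gold(turker_labels, gold_labels, k):
    """Average accuracy of k-annotator majority votes against gold labels.

    turker_labels: list of per-item label lists (one label per Turker).
    gold_labels:   list of expert labels, one per item.
    """
    scores = []
    for item_labels, gold in zip(turker_labels, gold_labels):
        # Average over every way of choosing k of this item's annotations.
        votes = [majority_vote(subset)
                 for subset in combinations(item_labels, k)]
        scores.append(mean(v == gold for v in votes))
    return mean(scores)

# Hypothetical data: five Turker labels per item plus one expert label.
turker = [["pos", "pos", "neg", "pos", "pos"],
          ["neg", "neg", "neg", "pos", "neg"]]
gold = ["pos", "neg"]
for k in (1, 3, 5):
    print(k, round(agreement_with_gold(turker, gold, k), 3))
```

Running the sketch with larger hypothetical datasets would show the agreement curve rising as k grows, which is the kind of analysis behind the "about four annotations per item" observation summarized above.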