The document discusses evaluating decision-aware recommender systems by balancing precision, coverage, and correctness. It proposes a correctness metric, adapted from question answering, that credits a system for abstaining from a recommendation rather than making an incorrect one. The authors apply this metric to collaborative filtering recommenders and introduce strategies for estimating prediction uncertainty based on nearest neighbors or probabilistic matrix factorization. Experiments show that tighter uncertainty constraints reduce novelty and diversity but improve precision.
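A minimal sketch of the two ideas summarized above, under stated assumptions: the abstention-aware correctness score is written in a c@1-style form (correct answers score 1, abstentions earn partial credit proportional to accuracy on the answered cases), and the nearest-neighbor uncertainty gate abstains when neighbor ratings disagree beyond a threshold. The function names (`correctness_with_abstention`, `knn_predict_with_abstention`) and the `max_std` threshold are hypothetical illustrations, not the document's exact definitions.

```python
import numpy as np

def correctness_with_abstention(decisions, ground_truth):
    """c@1-style correctness (an assumed formulation, not necessarily the paper's):
    decisions    -- predicted item ids, or None when the system abstains
    ground_truth -- the correct item id for each case
    Correct answers score 1; abstentions earn partial credit proportional
    to accuracy on the cases the system did answer."""
    n = len(decisions)
    n_correct = sum(1 for d, g in zip(decisions, ground_truth)
                    if d is not None and d == g)
    n_abstained = sum(1 for d in decisions if d is None)
    return (n_correct + n_abstained * n_correct / n) / n

def knn_predict_with_abstention(neighbor_ratings, item, k=5, max_std=0.5):
    """Predict a rating for `item` from up to k neighbors' ratings,
    abstaining (returning None) when too few neighbors rated the item
    or their disagreement (standard deviation) exceeds `max_std`."""
    ratings = [r[item] for r in neighbor_ratings[:k] if item in r]
    if not ratings or np.std(ratings) > max_std:
        return None  # too uncertain: decide not to recommend
    return float(np.mean(ratings))

# Tiny usage example with made-up data.
neighbors = [{"a": 4, "b": 1}, {"a": 4, "b": 5}, {"a": 5}]
print(knn_predict_with_abstention(neighbors, "a"))   # agreement -> prediction
print(knn_predict_with_abstention(neighbors, "b"))   # disagreement -> None (abstain)
print(correctness_with_abstention(["x", None, "y"], ["x", "z", "z"]))
```

Tightening `max_std` makes the system abstain more often, which is the trade-off the experiments report: fewer, safer recommendations improve precision at the cost of coverage, novelty, and diversity.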