The document presents an empirical evaluation of how formal argumentation can be bridged with natural language interfaces in distributed autonomous systems, focusing on how reasoning should be presented to improve human understanding. It details an experiment spanning four domains in which participants assessed the accuracy of arguments involving contradicted claims and preferences, yielding insights into how humans judge argument relevance and acceptability. The findings indicate a substantive link between formal argumentation systems and their natural language renderings, and highlight the role of context and collateral knowledge in human decision-making.
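To make the underlying machinery concrete, the following is a minimal sketch, assuming a Dung-style abstract argumentation framework with a strict preference relation; the document's actual formalism may differ. It illustrates how a preference can neutralize an attack between contradicting claims, and how the grounded extension then determines which arguments are acceptable. The function names (`effective_attacks`, `grounded_extension`) and the example arguments are hypothetical.

```python
# A minimal sketch, not the evaluated system: a Dung-style abstract
# argumentation framework with a strict preference relation, computing
# the grounded extension (the most skeptical set of acceptable arguments).

def effective_attacks(attacks, prefers):
    """Keep an attack (a -> b) only if b is not strictly preferred to a.

    attacks : set of (attacker, target) pairs
    prefers : set of (x, y) pairs meaning x is strictly preferred to y
    """
    return {(a, b) for (a, b) in attacks if (b, a) not in prefers}

def grounded_extension(arguments, attacks):
    """Iteratively accept arguments whose attackers are all defeated."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for arg in arguments:
            if arg in accepted or arg in defeated:
                continue
            attackers = {a for (a, b) in attacks if b == arg}
            if attackers <= defeated:  # every attacker is already out
                accepted.add(arg)
                changed = True
        # Anything attacked by an accepted argument is defeated.
        newly_defeated = {b for (a, b) in attacks if a in accepted} - defeated
        if newly_defeated:
            defeated |= newly_defeated
            changed = True
    return accepted

if __name__ == "__main__":
    args = {"A", "B", "C"}
    raw_attacks = {("A", "B"), ("B", "C")}  # A attacks B, B attacks C
    prefs = {("B", "A")}                    # B is strictly preferred to A
    atts = effective_attacks(raw_attacks, prefs)
    # A's attack on B is neutralized by the preference, so B survives
    # and defeats C; the grounded extension is {'A', 'B'}.
    print(grounded_extension(args, atts))
```

Under this reading, a natural language interface would verbalize the accepted set and the preference that decided each conflict, which is the kind of presentation choice the reported experiment examines.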