The document describes a factorial study in which neural networks were trained to perform regression using differences between cases rather than raw feature values. The factors varied included the amount of training data, the number of training epochs, the number of similar cases used to form the differences, and whether the original features were included alongside the differences. Learning from differences generally required about as much data as learning from raw values but converged faster. Including the original features was not always beneficial, though it never significantly hurt performance, and the best settings depended on the specific task. Overall, learning from differences showed promise but has limitations, such as difficulty scaling to large datasets.
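The core idea can be sketched as follows. This is a minimal illustration, not the study's actual implementation: it uses a linear least-squares model as a stand-in for the neural network, a simple nearest-neighbour search to pick similar cases, and an `include_original` flag for the factor of appending the original features to the differences. All function names and the toy data are assumptions for illustration.

```python
import numpy as np

def build_difference_pairs(X, y, k=3, include_original=False):
    """Pair each case with its k most similar cases and emit
    (feature difference -> target difference) training samples."""
    feats, targets = [], []
    for i in range(len(X)):
        dist = np.linalg.norm(X - X[i], axis=1)
        neighbors = np.argsort(dist)[1:k + 1]  # skip the case itself
        for j in neighbors:
            diff = X[i] - X[j]
            row = np.concatenate([diff, X[i]]) if include_original else diff
            feats.append(row)
            targets.append(y[i] - y[j])
    return np.array(feats), np.array(targets)

def predict_from_differences(x_new, X, y, w, k=3, include_original=False):
    """Predict a query's target by estimating its difference to each of
    the k most similar stored cases and averaging the implied targets."""
    dist = np.linalg.norm(X - x_new, axis=1)
    estimates = []
    for j in np.argsort(dist)[:k]:
        diff = x_new - X[j]
        row = np.concatenate([diff, x_new]) if include_original else diff
        estimates.append(y[j] + row @ w)
    return float(np.mean(estimates))

# Toy regression task: y = 2*x0 + x1
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(50, 2))
y = 2 * X[:, 0] + X[:, 1]

F, t = build_difference_pairs(X, y, k=3)
w, *_ = np.linalg.lstsq(F, t, rcond=None)  # linear stand-in for the network
pred = predict_from_differences(np.array([0.5, 0.5]), X, y, w, k=3)
print(round(pred, 2))  # exact target here is 1.5
```

Because predictions are anchored to the targets of retrieved similar cases, the model only has to learn how target values change with feature differences, which is one intuition for the faster convergence reported above.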