The document analyzes how many trees a random forest should contain. It grows forests ranging from 2 to 4096 trees, doubling the number of trees at each iteration, and draws three main conclusions: 1) increasing the number of trees does not always yield a significant performance improvement, and doubling the forest is often not worth the cost; 2) there appears to be a threshold beyond which no significant gain occurs unless huge computational resources are available; and 3) as more trees are added, more attributes tend to be used, which may be undesirable in some domains, such as biomedicine. The document also proposes density-based metrics for datasets that may relate to the VC dimension of decision trees.
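A minimal sketch of the doubling experiment described above: grow forests of 2, 4, 8, ..., 4096 trees and record cross-validated performance at each size, watching where the curve flattens. The choice of scikit-learn, the breast-cancer dataset, and AUC as the metric are assumptions for illustration; the document does not specify the tooling, data, or evaluation measure.

```python
# Hypothetical setup: scikit-learn, a stand-in dataset, and AUC scoring;
# none of these are specified by the source document.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset

# Double the number of trees at each iteration: 2, 4, 8, ..., 4096.
for n_trees in (2 ** k for k in range(1, 13)):
    forest = RandomForestClassifier(n_estimators=n_trees, random_state=0)
    # Mean AUC over 10 folds; gains typically shrink as the forest grows.
    auc = cross_val_score(forest, X, y, cv=10, scoring="roc_auc").mean()
    print(f"{n_trees:5d} trees: mean AUC = {auc:.4f}")
```

Plotting the resulting scores against the (log-scaled) number of trees is one way to locate the threshold the document describes, where additional doublings stop paying for their computational cost.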