This document summarizes two AutoML papers presented at NeurIPS 2018:
1) "Massively Parallel Hyperparameter Tuning", which proposes the Asynchronous Successive Halving Algorithm (ASHA) to parallelize hyperparameter tuning. Unlike synchronous successive halving, which leaves workers idle while waiting for every configuration in a rung to finish, ASHA promotes a configuration to the next rung as soon as it ranks in the top 1/η of the results seen so far, accepting occasional mispromotions in exchange for much better worker utilization at scale.
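The asynchronous promotion rule can be sketched as follows. This is a minimal illustration, not the paper's implementation; the data layout (a list of rungs mapping configuration names to a score and a promoted flag) and the function name are assumptions made here for clarity:

```python
def asha_promotable(rungs, rung_idx, eta=3):
    """Return a config promotable from rung `rung_idx`, or None.

    A config is promotable when its score ranks in the top 1/eta of all
    results recorded *so far* in the rung and it has not been promoted yet.
    Lower score is better (e.g. validation loss).

    rungs: list of dicts, each mapping config name -> (score, promoted_flag).
    """
    rung = rungs[rung_idx]
    k = len(rung) // eta  # number of configs allowed to advance so far
    if k == 0:
        return None
    # best k configs observed in this rung, ranked by score
    top = sorted(rung, key=lambda c: rung[c][0])[:k]
    for cfg in top:
        if not rung[cfg][1]:  # best-ranked config not yet promoted
            return cfg
    return None
```

The key difference from the synchronous algorithm is that this check runs every time a worker frees up, using only the results available at that moment, so no worker waits for a rung to fill before promotions happen.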
2) "Neural Architecture Optimization" (NAO), which maps discrete neural network architectures into a continuous embedding space with an encoder, optimizes embeddings by gradient ascent against a learned performance predictor, and decodes the improved embedding back into an architecture. The approach achieved state-of-the-art results on CIFAR-10 at publication, and the discovered architectures transfer well to other tasks.
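The core optimization step, moving an architecture embedding uphill on a differentiable surrogate before decoding it, can be illustrated with a toy sketch. The linear surrogate and the specific names here are stand-ins invented for this example; NAO trains an encoder, predictor, and decoder jointly, which this sketch does not attempt:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for NAO's learned surrogate (an assumption for illustration):
# architectures are points z in a continuous embedding space, and a linear
# model W @ z plays the role of the learned performance predictor f(z).
W = rng.normal(size=3)

def predicted_accuracy(z):
    """Surrogate performance predictor f(z)."""
    return W @ z

def improve(z, step=0.1, n_steps=10):
    """Gradient ascent on f in embedding space, as in NAO's optimization step."""
    for _ in range(n_steps):
        z = z + step * W  # gradient of the linear surrogate f(z) = W @ z is W
    return z

z0 = rng.normal(size=3)   # embedding of some starting architecture
z1 = improve(z0)          # moved embedding; NAO would decode this back
```

In the paper this gradient step is what replaces discrete search: because the predictor is differentiable in the embedding, better architectures are found by continuous optimization rather than by sampling or evolution.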