This document summarizes a survey paper on neural word embeddings and language modeling. It discusses early word embedding models such as word2vec and how later models targeted specific semantic relations or word senses, and it describes how morpheme embeddings capture sub-word information. The document also notes the datasets used to evaluate word embeddings, covering word-similarity, analogy, and synonym-selection tasks. It concludes that human-level language understanding remains a challenge, but that pre-trained language models transfer knowledge to specific tasks through fine-tuning, while multi-modal models learn concepts from images, loosely paralleling how humans ground language in perception.
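The analogy task mentioned above is typically solved by vector arithmetic over embeddings: the answer to "king is to man as ? is to woman" is the word whose vector is closest to king - man + woman. A minimal sketch with hypothetical toy vectors (the embedding values here are illustrative, not from any trained model):

```python
import numpy as np

# Toy embedding table; real models like word2vec learn such vectors from text.
embeddings = {
    "king":  np.array([1.0, 1.0, 0.0]),
    "man":   np.array([1.0, 0.0, 0.0]),
    "woman": np.array([1.0, 0.0, 1.0]),
    "queen": np.array([1.0, 1.0, 1.0]),
    "apple": np.array([0.0, 0.2, 0.1]),
}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c):
    """Answer 'a is to b as ? is to c' by nearest neighbor to a - b + c."""
    target = embeddings[a] - embeddings[b] + embeddings[c]
    candidates = {w: v for w, v in embeddings.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("king", "man", "woman"))  # → queen
```

Evaluation datasets for this task score a model by the fraction of such analogies it answers correctly.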