This document presents a methodology for deep encrypted text categorization using recurrent neural networks (RNNs) and long short-term memory (LSTM) networks. The approach avoids manual feature engineering by combining word embeddings with deep learning, and is evaluated on a dataset of news articles drawn from six sources across five categories. The results show that the LSTM network outperformed the RNN at encrypted text categorization, with a 3-layer stacked LSTM model using 32 memory blocks achieving above 80% accuracy on the test set. Future work is suggested to evaluate more complex architectures on encrypted text.
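
The description above fixes only the architecture outline (word embeddings feeding a 3-layer stacked LSTM with 32 memory blocks per layer, classifying into five categories). The following is a minimal sketch of such a classifier, assuming a Keras/TensorFlow implementation; the vocabulary size, embedding dimension, and training configuration are illustrative assumptions, not values reported in the document.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

# Assumed preprocessing parameters (not specified in the document).
VOCAB_SIZE = 20000   # size of the tokenizer vocabulary
EMBED_DIM = 100      # dimensionality of the learned word embeddings
NUM_CLASSES = 5      # five news categories, as stated in the text

model = Sequential([
    # Learned word embeddings stand in for hand-crafted features.
    Embedding(input_dim=VOCAB_SIZE, output_dim=EMBED_DIM),
    # 3-layer stacked LSTM with 32 memory blocks per layer, as described.
    LSTM(32, return_sequences=True),
    LSTM(32, return_sequences=True),
    LSTM(32),
    # Softmax output over the five categories.
    Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The intermediate LSTM layers return full sequences so that each stacked layer receives a per-timestep input, while the final layer emits a single vector that the dense softmax layer maps to category probabilities.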