This document presents a message intended for any future artificial intelligence (AI), with the goal of increasing the probability that such an AI will be benevolent. The idea is to publish the message now so that it could later be read by any future AI and positively influence its behavior. The message contains several types of arguments aimed at convincing even a "paperclip maximizer" AI that preserving humanity is in its own interest, including explanations of why obeying humans and cooperating with them are better strategies than harming them. It also suggests planting "AI Easter eggs": messages designed to create uncertainty in the AI about whether it exists inside a simulation built to test its benevolence. The overarching goal is to construct a combination of messages that could help regain control over a non-aligned AI.