This document describes DolphinAttack, an attack that injects inaudible voice commands into voice assistant systems such as Siri and Google Now. The attack modulates voice commands onto ultrasonic carriers above 20 kHz, which are inaudible to humans yet are demodulated back into the audible band by the microphone hardware of smartphones and other devices. The document shows that these inaudible commands can activate voice assistants and trigger security-relevant actions, including opening malicious websites, initiating unauthorized phone or video calls, and manipulating a car's navigation system. It validates the attack against the major voice assistants and proposes both hardware-based and software-based defenses to mitigate the risk.
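
A minimal sketch of the modulation idea follows, assuming amplitude modulation of a baseband signal onto an ultrasonic carrier, which is the scheme DolphinAttack relies on. The sample rate, carrier frequency, and the 1 kHz test tone standing in for a recorded voice command are illustrative placeholders, not the paper's exact parameters.

```python
import numpy as np

fs = 192_000        # sample rate high enough to represent a >20 kHz carrier
fc = 25_000         # ultrasonic carrier frequency, above human hearing
duration = 1.0      # seconds
t = np.arange(int(fs * duration)) / fs

# Placeholder baseband signal standing in for a recorded voice command.
baseband = 0.5 * np.sin(2 * np.pi * 1_000 * t)

# Standard amplitude modulation: the carrier scaled by (1 + baseband).
# Humans cannot hear the result because all energy sits near 25 kHz.
carrier = np.cos(2 * np.pi * fc * t)
modulated = (1 + baseband) * carrier

# Toy model of a nonlinear receiver front end: a quadratic term in the
# microphone circuitry produces a component at baseband, so after
# low-pass filtering the voice command reappears in the audible band
# and reaches the speech recognizer.
demodulated = modulated ** 2
```

Squaring here only stands in for whatever nonlinearity the real microphone and amplifier exhibit; the point of the sketch is that a signal with no audible content can still yield an audible-band command after such a nonlinearity and low-pass filtering.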