Smartphones can receive and execute commands via voice input. These commands, however, can be acoustically altered in such a way that the smartphone still identifies them as speech input while the human ear barely perceives them — and, when embedded in music, for example, does not register them at all. The smartphone, however, still does.
Voice interfaces are becoming more ubiquitous and are now the primary input method for many devices. We explore in this paper how they can be attacked with hidden voice commands that are unintelligible to human listeners but are interpreted as commands by devices. We evaluate these attacks under two different threat models. In the black-box model, an attacker uses the speech recognition system as an opaque oracle. We show that the adversary can produce difficult-to-understand commands that are effective against existing systems in the black-box model. Under the white-box model, the attacker has full knowledge of the internals of the speech recognition system and uses it to create attack commands that, as we demonstrate through user testing, are not understandable by humans. We then evaluate several defenses, including notifying the user when a voice command is accepted; a verbal challenge-response protocol; and a machine learning approach that can detect our attacks with 99.8% accuracy.
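The black-box attack described above can be sketched as a simple query loop: repeatedly distort the command audio and keep the most distorted version the recognizer still accepts. The sketch below is a toy illustration only — `recognizer_accepts` is a hypothetical stand-in for querying a real speech-to-text service, and the noise-based `obfuscate` replaces the acoustic-feature manipulation the attack actually relies on.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "command" waveform standing in for recorded speech audio.
command = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 800))

def recognizer_accepts(audio):
    """Hypothetical opaque oracle: accepts if the audio still
    correlates strongly enough with the clean command.
    A real black-box attack would query an actual recognizer."""
    corr = np.corrcoef(audio, command)[0, 1]
    return corr > 0.6

def obfuscate(audio, noise_level):
    """Degrade human intelligibility by adding noise; the real
    attack instead manipulates acoustic (e.g. MFCC) features."""
    return audio + noise_level * rng.standard_normal(audio.shape)

# Black-box search: push the distortion as far as the oracle allows.
best = command
for level in np.linspace(0.1, 2.0, 20):
    candidate = obfuscate(command, level)
    if recognizer_accepts(candidate):
        best = candidate
```

The key property mirrored here is that the attacker needs only accept/reject feedback, never the recognizer's internals; the white-box variant instead optimizes directly against the model's feature pipeline.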