Voice has become one of the major factors in how we communicate with technology. The ability to activate and control devices using sound or voice commands is the next big thing most tech companies are moving toward. It is quite evident nowadays that these companies are looking to break down major communication barriers in developing countries by using technologies like AI assistants and smart speakers.
When it comes to regular day-to-day usage, these technologies come in handy for keeping the public up to date with the latest happenings from around the world. In developing countries like India, voice-enabled devices are a big hit: with the Amazon Echo and Google Home, the Indian market has only just begun receiving its share of such devices.
A team of researchers at the University of California, Berkeley has recently published a research paper suggesting that it is now possible to embed hidden voice commands within recordings of music or speech and use them to control some smart assistants.
The report also suggests that the commands can control popular voice assistants like Amazon's Alexa and Apple's Siri without human listeners hearing any direct command being issued. The team had demonstrated earlier that it was possible to hide commands in white noise and YouTube videos in order to control smart devices remotely.
Most importantly, when such a video or recording is played, the voice assistant hears specific commands, whereas the user only hears a song playing or someone talking.
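The general idea behind such attacks is to add a small, carefully chosen perturbation to an audio signal so that a speech recognizer's output changes while the audio still sounds normal to a person. The following is a minimal toy sketch of that principle using a stand-in linear "recognizer"; it is not the researchers' actual method or model, and all names and parameters here are illustrative.

```python
import numpy as np

# Toy sketch: nudge an audio vector so a hypothetical linear "recognizer"
# outputs a chosen target command, keeping the perturbation small relative
# to the original signal. Real attacks target neural speech-to-text models;
# this stand-in only illustrates the optimization idea.

rng = np.random.default_rng(0)
n_samples, n_commands = 256, 4

W = rng.normal(size=(n_commands, n_samples))   # toy recognizer weights
audio = rng.normal(size=n_samples)             # stands in for a music clip
target = 2                                     # command we want it to "hear"

def predicted(x):
    # The recognizer picks whichever command scores highest.
    return int(np.argmax(W @ x))

# Gradient ascent on the target command's score, in small steps,
# stopping as soon as the recognizer flips to the target.
x = audio.copy()
for _ in range(1000):
    if predicted(x) == target:
        break
    x += 0.005 * W[target]

perturbation = x - audio
# The recognizer now hears the target command, yet the perturbation
# is smaller than the original signal.
```
A real attack performs the same kind of optimization against a full speech-to-text network and additionally constrains the perturbation to be imperceptible to human ears.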
Nicholas Carlini, a fifth-year PhD student at UC Berkeley and one of the co-authors of the paper, said that the team just wanted to see if they could make the previously demonstrated exploit even more stealthy. When asked if such an exploit could already be found in the wild, Carlini said, "My assumption is that the malicious people already employ people to do what I do."
Last year, researchers at Princeton University and China's Zhejiang University demonstrated another such exploit, dubbed DolphinAttack, which used ultrasonic sounds to attack the voice recognition systems in popular digital assistants. The attack could instruct smart devices to visit malicious websites, make phone calls, take a picture, or send text messages. It had its limitations, however: it could only be carried out if the ultrasonic transmitter was close to the receiving device.
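The trick behind ultrasonic attacks of this kind is amplitude modulation: the audible voice command is modulated onto an ultrasonic carrier that humans cannot hear, and nonlinearity in a microphone's circuitry demodulates it back into the audible band. The sketch below illustrates that signal-processing step with numpy; the carrier frequency, modulation depth, and square-law microphone model are illustrative assumptions, not figures from the DolphinAttack paper.

```python
import numpy as np

# Sketch of ultrasonic command injection via amplitude modulation:
# a baseband "voice command" rides on a 25 kHz carrier, above human
# hearing. A microphone with a square-law nonlinearity recovers the
# command at its original, audible frequency.

fs = 192_000                  # sample rate high enough for a 25 kHz carrier
t = np.arange(0, 0.01, 1 / fs)

command = np.sin(2 * np.pi * 400 * t)      # stand-in for a voice command
carrier = np.cos(2 * np.pi * 25_000 * t)   # ultrasonic carrier

# Classic AM: all transmitted energy sits near 25 kHz, so a human
# listener hears nothing.
transmitted = (1 + 0.5 * command) * carrier

# Squaring models the microphone nonlinearity: (1 + m)^2 * cos^2 contains
# a baseband term proportional to m, i.e. the command reappears at 400 Hz.
received = transmitted ** 2

# Compare energy in the audible 300-500 Hz band before and after.
freqs = np.fft.rfftfreq(len(t), 1 / fs)
band = (freqs > 300) & (freqs < 500)
spectrum_tx = np.abs(np.fft.rfft(transmitted))
spectrum_rx = np.abs(np.fft.rfft(received))
```
In the transmitted spectrum the band around 400 Hz is essentially empty, while the demodulated signal shows a strong peak there, which is why the assistant can hear a command the user cannot.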