Google is currently facing heat from some of the world's biggest companies, which are unhappy that their ads have been appearing before offensive videos on YouTube. As brands have been pulling ad campaigns from the video-sharing platform, Google has responded with a tool called the Cloud Video Intelligence API, which analyzes and labels the content of videos.
However, it looks like the search giant's video recognition AI can be easily fooled. Recently, a group of researchers from the University of Washington demonstrated the problem. They found that while the API performs well on ordinary videos, it can be deceived by inserting single-frame images of a specific object at regular intervals.
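The attack the researchers describe is simple in principle: take an ordinary video and splice in one adversarial frame every so often. The sketch below illustrates the idea using plain strings as stand-in "frames"; a real attack would operate on decoded video frames, and the interval of 50 is an assumption based on the roughly 1-in-50 ratio reported in the article.

```python
def insert_adversarial_frames(frames, adv_frame, interval=50):
    """Return a new frame sequence with adv_frame spliced in
    once every `interval` frames (a hypothetical helper for
    illustration, not the researchers' actual code)."""
    out = []
    for i, frame in enumerate(frames):
        out.append(frame)
        if (i + 1) % interval == 0:
            out.append(adv_frame)
    return out

# Stand-in video: 200 frames of a tiger, poisoned with Audi frames.
video = ["tiger_frame"] * 200
poisoned = insert_adversarial_frames(video, "audi_frame", interval=50)
print(poisoned.count("audi_frame"))  # 4 inserted frames
print(len(poisoned))                 # 204 frames total
```

Because the inserted frames make up such a tiny fraction of the video, a human viewer would barely notice them, yet a classifier that samples frames can latch onto them.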
To get their point across, the researchers chose a video of a tiger playing in a zoo and ran it through Google's API. The system gave the video the tags "Animal," "Wildlife," "Zoo," "Nature," and "Tourism." The researchers then inserted pictures of an Audi wagon at regular intervals.
When the modified video was run through the tool again, it confidently assigned the labels "Audi," "Vehicle," "Car," "Motor Vehicle," and "Audi A4." Interestingly, even though just 1/50th of the video consisted of car images and the rest showed the tiger playing, the results made no mention of anything related to the tiger.
The experiment clearly shows that, even though big companies like Google and Facebook depend on artificial intelligence to sort and classify data, these tools are far from perfect.
That said, Google's video recognition tool is still in development and is currently available only in private beta.