The converted video can be played back on any 3D device: a commercial 3D TV; Google's Cardboard system, which turns smartphones into 3D displays; or special-purpose displays such as the Oculus Rift.
"Any TV these days is capable of 3D. There's just no content. So we see that the production of high-quality content is the main thing that should happen," says Wojciech Matusik, associate professor of electrical engineering and computer science at MIT.
Today's video games generally store very detailed 3D maps of the virtual environment that the player is navigating. When the player initiates a move, the game adjusts the map accordingly and, on the fly, generates a 2D projection of the 3D scene that corresponds to a particular viewing angle.
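That on-the-fly projection step can be sketched with a simple pinhole-camera model. This is only a minimal illustration of how a 3D scene maps to a 2D image, not the game's actual renderer; the function name and parameter values are hypothetical:

```python
import numpy as np

def project_points(points_3d, focal, cx, cy):
    """Project 3D camera-space points onto a 2D image plane
    using a basic pinhole-camera model."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = focal * x / z + cx  # depth z controls how far points shift on screen
    v = focal * y / z + cy
    return np.stack([u, v], axis=1)

# A toy "scene": three points at increasing depth.
scene = np.array([[ 0.0,  0.0, 2.0],
                  [ 1.0,  0.5, 4.0],
                  [-1.0, -0.5, 8.0]])
pixels = project_points(scene, focal=500.0, cx=320.0, cy=240.0)
```

Reversing this mapping from a single 2D frame is ambiguous, which is why the researchers instead look up the depth information the game already computed.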
The MIT and QCRI researchers essentially ran this process in reverse. They set Electronic Arts' highly realistic soccer game "FIFA 13" to play over and over again, and used Microsoft's video-game analysis tool PIX to continuously store screen shots of the action. For each screen shot, they also extracted the corresponding 3D map.
Using a standard algorithm, they ruled out most of the screen shots, keeping just those that best captured the range of possible viewing angles and player configurations. Then they stored each screen shot and the associated 3D map in a database.
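The culling step can be sketched as greedy farthest-point sampling over frame features: repeatedly keep the screen shot most different from everything kept so far. The article does not name the actual algorithm used, so the function, features, and database layout below are illustrative assumptions:

```python
import numpy as np

def select_representative(frames, k):
    """Greedy farthest-point sampling: keep the k frames that best
    span the variety of the collection (a stand-in for the
    unspecified culling algorithm)."""
    feats = np.array([f.ravel() for f in frames], dtype=float)
    chosen = [0]                                        # seed with the first frame
    dist = np.linalg.norm(feats - feats[0], axis=1)     # distance to the kept set
    while len(chosen) < k:
        nxt = int(np.argmax(dist))                      # frame farthest from all kept frames
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(feats - feats[nxt], axis=1))
    return chosen

# Toy stand-ins for screen shots: tiny 2x2 "images"; 0 and 3 are the extremes.
frames = [np.full((2, 2), v) for v in (0.0, 1.0, 2.0, 10.0)]
keep = select_representative(frames, k=2)
# Store each kept screen shot (paired, in practice, with its 3D map).
database = {i: frames[i] for i in keep}
```

At conversion time, each incoming 2D frame could then be matched against this database to retrieve a plausible depth map.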
"The result is a very convincing 3D effect, with no visual artifacts," the authors noted. In the past, researchers have tried to develop general-purpose systems for converting 2D video to 3D but they haven't worked very well. "Our advantage is that we can develop it for a very specific problem domain," Matusik added.
"We are developing a conversion pipeline for a specific sport. We would like to do it at broadcast quality, and we would like to do it in real-time. What we have noticed is that we can leverage video games," he explained. The researchers presented the new system at the Association for Computing Machinery's Multimedia conference in Brisbane, Australia, last week.