Abe Davis has produced a video breakdown of his ongoing PhD dissertation at MIT. In it, algorithms are used to extract an image-space representation of an object's structure from a video and then synthesise plausible animations of the object responding to new, unseen forces, giving a user the ability to interact with an object captured in just a short video.
Abe Davis demonstrates the technique and explains how it works.
The technique currently derives the object's structure from an image-space analysis of the video as the object deforms. This allows the software to project how the object might respond to new forces.
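To give a rough sense of the kind of analysis involved, here is a minimal Python sketch. It is not Davis's actual pipeline, which uses far more sophisticated local motion analysis; it simply treats per-pixel brightness variation over time as a proxy for motion, finds the dominant vibration frequency with an FFT, and animates that mode ringing down after a hypothetical new impulse. The function names, video array, and parameter values are illustrative assumptions.

```python
import numpy as np

def dominant_mode(video):
    """video: float array of shape (T, H, W) holding T frames of a grayscale clip."""
    T = video.shape[0]
    # Temporal variation of each pixel around its mean appearance.
    motion = video - video.mean(axis=0, keepdims=True)
    spectrum = np.fft.rfft(motion, axis=0)             # per-pixel temporal spectrum
    power = (np.abs(spectrum) ** 2).sum(axis=(1, 2))   # total power per frequency bin
    power[0] = 0.0                                      # ignore the DC term
    k = int(np.argmax(power))                           # dominant frequency bin
    mode_shape = np.real(spectrum[k])                   # spatial pattern of that mode
    return k, mode_shape / (np.abs(mode_shape).max() + 1e-8)

def simulate_impulse(video, frames=120, fps=30.0, damping=0.05):
    """Synthesize new frames: the dominant mode ringing down after an impulse."""
    k, shape = dominant_mode(video)
    T = video.shape[0]
    freq_hz = k * fps / T                               # bin index -> Hz
    base = video.mean(axis=0)                           # static appearance
    t = np.arange(frames) / fps
    envelope = np.exp(-damping * 2 * np.pi * freq_hz * t)
    response = envelope * np.sin(2 * np.pi * freq_hz * t)
    # Modulate the base image by the mode shape, scaled by the modal response.
    return base[None] + response[:, None, None] * shape[None]

# Example with synthetic data standing in for a short clip of a vibrating object.
rng = np.random.default_rng(0)
clip = rng.random((64, 32, 32))
new_frames = simulate_impulse(clip)
print(new_frames.shape)  # (120, 32, 32)
```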
Davis explains that “one of the most important ways that we experience our environment is by manipulating it: we push, pull, poke, and prod to test hypotheses about our surroundings. By observing how objects respond to forces that we control, we learn about their dynamics [but] unfortunately, regular video does not afford this type of manipulation – it limits us to observing what was recorded. The goal of our work is to record objects in a way that captures not only their appearance, but their physical behaviour as well.”
Two major applications of this technology are the production of low-cost special effects for movies and assisting engineers working on Structural Health Monitoring (SHM).
Structural Health Monitoring currently allows engineers to identify damage or changes to the material and/or geometric properties of a structural system, such as a bridge or building.
Because the SHM process involves observing a system over time, using periodically sampled dynamic response measurements from an array of sensors, it can be costly.
If this technique could be applied to SHM, video captured over a much shorter period could be used to extrapolate how a system would respond to forces, offering a completely new approach.
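Whether the dynamic response comes from physical sensors or from video-derived motion, one common SHM check is to look for shifts in a structure's natural frequencies, which change when stiffness changes. The sketch below illustrates that idea only; the signals, sample rate, and tolerance are illustrative assumptions and not part of the published work.

```python
import numpy as np

def dominant_frequency(signal, sample_rate):
    """Return the strongest non-DC frequency (Hz) in a 1-D dynamic response measurement."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    return freqs[int(np.argmax(spectrum))]

def frequency_shift_alarm(baseline, current, sample_rate, tolerance=0.05):
    """Flag a possible structural change if the dominant frequency shifts by more than the given fraction."""
    f0 = dominant_frequency(baseline, sample_rate)
    f1 = dominant_frequency(current, sample_rate)
    shift = abs(f1 - f0) / f0
    return shift > tolerance, f0, f1

# Synthetic example: a structure whose dominant frequency drops from 2.0 Hz to 1.8 Hz.
fs = 100.0
t = np.arange(0, 30, 1.0 / fs)
baseline = np.sin(2 * np.pi * 2.0 * t)
current = np.sin(2 * np.pi * 1.8 * t)
alarm, f0, f1 = frequency_shift_alarm(baseline, current, fs)
print(alarm, round(f0, 2), round(f1, 2))  # True 2.0 1.8
```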