Our Results
The authors of the MIT paper we based the project on [Wu et al. 2012] wrote their program in MATLAB, but our group preferred to work in Python. We used the SciPy library to implement the temporal filters and the OpenCV library to read, process, and write the videos.
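Roughly, our video handling followed the pattern sketched below (the load_video and save_video names are illustrative, not the exact functions from our code; frames are kept as floats so the later filtering steps behave sensibly):

import cv2
import numpy as np

def load_video(path):
    """Read a video into a float array of shape (frames, height, width, 3)."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV returns BGR uint8; convert to float RGB for filtering
        frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0)
    cap.release()
    return np.stack(frames)

def save_video(path, frames, fps):
    """Write a (frames, height, width, 3) float RGB array back to disk."""
    h, w = frames.shape[1:3]
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for frame in np.clip(frames * 255.0, 0, 255).astype(np.uint8):
        writer.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))
    writer.release()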
​
Before amplifying motion, we started with the simpler task of amplifying the color changes in a video that accompany motion. Our color amplification is a more basic form of the full motion magnification: it simply exaggerates the color changes each pixel experiences over time.
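At a high level, the color amplification step looks like the following sketch (a simplified illustration, assuming a Butterworth band-pass filter from SciPy; the amplify_color name and filter order are ours, not from the paper, which uses an ideal band-pass filter):

import numpy as np
from scipy.signal import butter, filtfilt

def amplify_color(frames, fps, low, high, alpha):
    """Band-pass each pixel's intensity over time and add the
    amplified temporal signal back to the original frames.

    frames: float array of shape (num_frames, height, width, channels)
    low, high: temporal pass-band in Hz; alpha: amplification factor
    """
    # Design a temporal Butterworth band-pass filter
    b, a = butter(1, [low, high], btype="bandpass", fs=fps)
    # Filter along the time axis (axis 0), i.e. each pixel independently
    filtered = filtfilt(b, a, frames, axis=0)
    # Exaggerate the temporal color variation and add it back
    return frames + alpha * filtered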
​
To process the input video, our program first constructs a Laplacian pyramid for each frame, with each level corresponding to a spatial frequency band. Each level is then temporally filtered, amplifying the motion in the range specified by the pass-band. Finally, the pyramid is collapsed to recover the motion-amplified video.
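A sketch of the pyramid construction and collapse steps is below (function names are illustrative; in the full pipeline each band is also run through a temporal band-pass filter like the one above and scaled before collapsing):

import cv2
import numpy as np

def laplacian_pyramid(frame, levels):
    """Decompose one float frame into spatial frequency bands."""
    pyramid = []
    current = frame
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        pyramid.append(current - up)   # band-pass residual at this scale
        current = down
    pyramid.append(current)            # low-pass residual at the coarsest scale
    return pyramid

def collapse_pyramid(pyramid):
    """Recombine the bands back into a single frame."""
    frame = pyramid[-1]
    for band in reversed(pyramid[:-1]):
        frame = cv2.pyrUp(frame, dstsize=(band.shape[1], band.shape[0])) + band
    return frame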
​
After checking that our program worked by comparing our output with the baby video from the paper, we tested it on our own data: the head of a crane, a swing, and the strings of a piano. The piano output was much worse than the others, which we think is due to a couple of possible factors. First, the background of the piano video is quite complex, with lots of crisscrossing wires, and the lighting was not bright enough. Second, our pass-band filter had a problem with higher frequency bounds that we were unable to resolve. The other videos successfully amplified the motions, though the results were noisier than those in the MIT paper.
​